What’s wrong with letting tech run our schools


Silicon Valley tech moguls are conducting an enormous experiment on the nation’s children. We should not be so trusting that they’ll get it right.

Alphabet unit Google has taken a big role in public education, offering low-cost laptops and free apps. Mark Zuckerberg of Facebook Inc. is investing heavily in educational technology, largely through the Chan Zuckerberg Initiative. Netflix head Reed Hastings has been tinkering with expensive, algorithm-driven ed-tech tools.

Encouraging as all this may be, the technologists might be getting ahead of themselves, both politically and ethically. Also, there’s not a lot of evidence that what they’re doing works.

Like it or not, education is political. People on opposite ends of the political spectrum read very different science books and can’t seem to agree on fundamental principles. It stands to reason that what we choose to teach our children will vary depending on our beliefs. That’s to acknowledge anti-scientific curricula, not to defend them.

Zuckerberg and Bill Gates learned this the hard way last year when the Ugandan government ordered the closure of 60 schools, part of a network providing highly scripted, low-cost education in Africa, amid allegations that they had been “teaching pornography” and “conveying the gospel of homosexuality” in sex-ed classes. Let’s face it, something similar could easily happen here if tech initiatives expand beyond the relatively apolitical subjects, such as math, on which they have so far focused.

Beyond that, there are legitimate reasons to be worried about letting tech companies wield so much influence in the classroom. They tend to offer “free services” in return for access to data, a deal that raises some serious privacy concerns — particularly if you consider that it can involve tracking kids’ every click, keystroke and backspace from kindergarten on.

My oldest son is doing extremely well as a junior in school right now, but he was a late bloomer who didn’t learn to read until third grade. Should that be a part of his permanent record, data that future algorithms could potentially use to assess his suitability for credit or a job? Or what about a kid whose “persistence score” on dynamic, standardized tests waned in 10th grade? Should colleges have access to that information in making their admissions decisions?

These are not far-fetched scenarios. Consider the fate of the nonprofit education venture inBloom, which sought to collect and integrate student records in a way that would allow lessons to be customized. The venture shut down a few years ago amid concerns about how sensitive information, including tags identifying students as “tardy” or “autistic,” would be protected from theft and whether it would be shared with outside vendors.

Google and others are collecting similar data and using it internally to improve their software. Only after some prompting did Google agree to comply with the Family Educational Rights and Privacy Act, the student-privacy law known as FERPA, which had already been weakened to permit sharing data with third parties. It’s not clear how the data will ultimately be used, how long the current crop of students will be tracked, or to what extent their futures will depend on their current performance.

Nobody really knows to what educational benefit we are bearing such uncertainties. What kinds of kids will the technological solutions reward? Will they be aimed toward producing future Facebook engineers? How will they serve children in poverty, with disabilities or with different learning styles? As far as I know, there’s no standard audit that would allow us to answer such questions.

WASHINGTON POST
