
Monday, July 30, 2018

7 Requirements To Launch A Personalized Learning Program

So you've decided that you want to get in on the Next Big Thing in education and launch a computer-based personalized learning program. Now you just need to line up the following items.

1. A huge library of materials.

You need teaching materials to suit every single student who logs on, every day, based on the individual student's strengths and weaknesses. After all, you're replacing the old system where mass-produced materials were intended to roughly hit the territory occupied by the majority of students. One size mostly fit most. If your instructional program is going to be custom-tailored to each student, you will need a massive library of materials that can cover every possible contingency. And that library will also have to reflect a deep, wide, and rich understanding of the content material (not just the understanding of a programming guy).



2. Really smart Artificial Intelligence software.

We've gotten pretty sloppy with the term "artificial intelligence," slapping it on every algorithm created, no matter how simple that algorithm is. But to run your personalized learning program, you will need the real thing. It will need to be able to analyze student strengths and weaknesses across a broad spectrum of domains and standards, and then it will need to be able to dip into that massive library of materials to select the next activity for that student. Perhaps, if you have a super-duper AI, you can have it actually generate new custom materials so that you don't need the library referenced in #1. But if you have an AI that is really that smart, there are probably places you can be making way more money than in education.
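To see the gap between the algorithms that get the AI label and the real thing, consider a toy sketch (all names and data hypothetical, not any vendor's actual system) of the kind of "pick the weakest skill" recommender that routinely gets marketed as artificial intelligence:

```python
# A toy sketch, with hypothetical names and data, of a "weakest skill first"
# recommender: the sort of simple algorithm that often gets labeled AI.
from collections import defaultdict

def estimate_mastery(score_history):
    """Average each skill's past scores (0.0 to 1.0) into a crude mastery estimate."""
    totals, counts = defaultdict(float), defaultdict(int)
    for skill, score in score_history:
        totals[skill] += score
        counts[skill] += 1
    return {skill: totals[skill] / counts[skill] for skill in totals}

def pick_next_activity(score_history, library):
    """Serve the next item from the library for the student's weakest skill."""
    mastery = estimate_mastery(score_history)
    weakest = min(mastery, key=mastery.get)
    # The hard part hides here: the library has to contain good material
    # for every skill, at every level, for every student.
    items = library.get(weakest, [])
    return items.pop(0) if items else None

history = [("fractions", 0.4), ("fractions", 0.5), ("decimals", 0.9)]
library = {"fractions": ["Fraction practice set 3"], "decimals": ["Decimal quiz 2"]}
print(pick_next_activity(history, library))  # Fraction practice set 3
```

A loop like this takes an afternoon to write. What it cannot do is understand the content, notice why a student is struggling, or generate new material, which is exactly what the sales pitch claims.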



3. An honest-to-God text assessment system.

Companies keep claiming to have software that can grade essays. They have not been correct yet. Measuring the educational progress of the student cannot be accurately accomplished only through objective assessments such as multiple-choice questions. There must be open-ended questions that require sentences and paragraphs to answer. That, in turn, requires assessment software that can not only analyze structure but also tell whether the content is accurate and sensible.
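To make the distinction concrete, here is a toy scorer (entirely hypothetical, not any real product's method) that looks only at structure, the kind of analysis that is easy to automate. Notice that it hands a factually absurd answer the same grade as a correct one:

```python
# A toy, hypothetical structure-only scorer: it counts length, vocabulary
# variety, and sentence shape, but has no way to tell accurate from absurd.
import re

def structure_score(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    length = min(len(words) / 200, 1.0)                        # longer looks "better"
    variety = len(set(w.lower() for w in words)) / len(words)  # varied vocabulary
    sentence_shape = min(len(words) / len(sentences) / 20, 1.0)
    return round((length + variety + sentence_shape) / 3, 2)

accurate = "The Civil War ended in 1865. Lee surrendered to Grant at Appomattox."
nonsense = "The Civil War ended in 1492. Grant surrendered to Napoleon at Yorktown."
print(structure_score(accurate), structure_score(nonsense))  # identical scores
```

Length, variety, and sentence shape are easy to count; accuracy and sense are not.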

4. A good interface.

Students who work with computer-centered personalized programs invariably talk about the problems of dealing with the interface. Design that is ugly. Windows for typing long answers that don't let you see all of what you've typed. Questions that allow only for certain narrow views of the problem presented. And all this is outside the challenge of expecting six-year-olds to be proficient in the use of a mouse and a screen (heck, today you can find plenty of tech-savvy teens who have never worked with a mouse). You must ensure that your students are learning the content, and not just learning your software.




5. A data collection system.

The simple version is to use each assignment as a data source to figure out the next assignment. But at boutique personalized education schools like Silicon Valley's AltSchool, you find a whole staff of teachers, techs, and other personnel busily capturing and processing data daily. To really personalize education, you need to collect, enter, and process a ton of data. It's going to be time-consuming and costly.
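To give a sense of scale, here is a minimal sketch of one record such a system would need to capture for a single student on a single activity (the fields are illustrative guesses, not any vendor's actual schema). Multiply it by every student, every activity, every day:

```python
# A minimal sketch of one interaction record a data collection system would
# need to capture; the fields are illustrative guesses, not any vendor's schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InteractionEvent:
    student_id: str
    activity_id: str
    skill: str
    score: float        # 0.0 to 1.0
    seconds_spent: int
    hints_used: int
    timestamp: str

event = InteractionEvent("s-1042", "fractions-set-3", "fractions", 0.6, 412, 2,
                         datetime.now(timezone.utc).isoformat())
print(json.dumps(asdict(event)))  # one row of the "ton of data"
```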

6. A data security system.

Once we've collected all that data, we are a rich target for hackers who would like to steal it and for entrepreneurs who would like to rent it. Collecting and storing this much data about children poses some practical concerns, some legal concerns, and certainly some ethical concerns. But once we get past the question of whether we should even have it in the first place, we have to deal with the question of how we will keep it secure.

7. Nerves of steel (and special corner-cutting scissors).

From Rocketship to Summit to the above-mentioned AltSchool, providers of personalized learning have discovered that really doing it is hard and expensive (and therefore marketable only to a narrow group of people). They have instead switched to a business model in which a scaled-down version of their program can be purchased by any school district. It reminds me of the Project Runway episodes where the designers are supposed to create a fabulous runway look, and then also create a cheaper, simpler knockoff for the ready-to-wear market.

When the costs of your personalized learning program get too great (and your investors get antsy), where will you cut corners? Which of the necessities, the ones you can't afford or the ones that don't actually exist, will you scale down? And will you have the nerve to market the idealized complete personalized package while you are actually selling something less spectacular? Will you have the guts to sell your vaporware as if it's as solid and real as the leather seats in your Lexus?

Originally posted at Forbes. Check it out: I'm now a regular there on Mondays and Thursdays.     

2 comments:

  1. So glad that the plutocrat readers at Forbes will now have a regular guy to provide a more realistic outlook of what the true challenges are in public education. Good on you, Peter!

    Christine Langhoff

  2. Peter,

    You gotta know this is coming, eh! :-)

    "Measuring the educational progress of the student cannot be accurately accomplished EVER."

    The most misleading concept/term in education is "measuring student achievement" or "measuring student learning". The concept has been misleading educators into deluding themselves that the teaching and learning process can be analyzed/assessed using "scientific" methods which are actually pseudo-scientific at best and at worst a complete bastardization of rationo-logical thinking and language usage.

    There never has been and never will be any "measuring" of the teaching and learning process and what each individual student learns in their schooling. There is and always has been assessing, evaluating, judging of what students learn but never a true "measuring" of it.

    But, but, but, you're trying to tell me that the supposedly august and venerable APA, AERA and/or the NCME have been wrong for more than the last 50 years, disseminating falsehoods and chimeras??
    Who are you to question the authorities in testing???

Yes, they have been wrong and I (and many others, Wilson, Hoffman, etc.) question those authorities and challenge them (or any of you other advocates of the malpractices that are standards and testing) to answer to the following onto-epistemological analysis:

    The TESTS MEASURE NOTHING, quite literally when you realize what is actually happening with them. Richard Phelps, a staunch standardized test proponent (he has written at least two books defending the standardized testing malpractices) in the introduction to “Correcting Fallacies About Educational and Psychological Testing” unwittingly lets the cat out of the bag with this statement:

“Physical tests, such as those conducted by engineers, can be standardized, of course [why of course of course], but in this volume, we focus on the measurement of latent (i.e., nonobservable) mental, and not physical, traits.” [my addition]

Notice how he is trying to assert by proximity that educational standardized testing and the testing done by engineers are basically the same, in other words a “truly scientific endeavor”. Asserting sameness by proximity is not a good rhetorical/debating technique.

    Since there is no agreement on a standard unit of learning, there is no exemplar of that standard unit and there is no measuring device calibrated against said non-existent standard unit, how is it possible to “measure the nonobservable”?

    THE TESTS MEASURE NOTHING for how is it possible to “measure” the nonobservable with a non-existing measuring device that is not calibrated against a non-existing standard unit of learning?????

    PURE LOGICAL INSANITY!

The basic fallacy here is confusing and conflating metrological measuring (metrology is the scientific study of measurement) with measuring that connotes assessing, evaluating and judging. The two meanings are not the same, and confusing and conflating them is a very easy way to make it appear that standards and standardized testing are "scientific endeavors", objective rather than subjective like assessing, evaluating and judging.

    And talking about interface, the box I am typing in gives me four lines to see. Just a simple example of how hard it is to get right all of the factors that you discuss.


