Why most parents’ kids are from Lake Wobegon

Photo from stage showing Garrison Keillor telling stories
“The Lake Wobegon effect” is a term for the tendency to overestimate one’s abilities relative to others.

Why do parents have such unrealistically high assessments of their children’s academic performance?

That’s a question Michael J. Petrilli asks at EducationNext in a blog post titled Common Confusion.

The Common Core was supposed to be associated with tests that showed more accurately the relationship between stated standards and student performance. As those tests have been used, student test scores have gone down—way down, in many cases—but parents’ reports of how their children are doing remain high.

A study released in May reported 90 percent of parents believe their children are performing at “grade level” or higher in their schoolwork.

In New York State, where I live, about a quarter of students fail to graduate on time. Of those who graduate and go on to college, about a third end up taking at least one remedial course. I can tell you from my college teaching experience that if a student can’t pass the test to escape remedial English, that student hasn’t been at grade level for about eight years.

Chart showing 2012 HS graduation rates in New York State and the percentage of graduates who are college and career ready.
New York State’s report of college and career readiness of 2012 cohort of students.

Petrilli suggests giving parents more direct information about their kids’ performance when test results are reported, possibly even offering resources for concerned parents to use.

Peter Greene, on his blog Curmudgucation, takes issue with Petrilli’s comments, which Greene reads as being about grade inflation. Greene argues that if grade inflation exists in K-12 education, it’s allowed to happen because there’s no objective standard for what grade students should really be getting.

I find myself in agreement with both men on certain points: in particular, with Petrilli on the need to report test results in ways that will make sense to parents, and with Greene on the deleterious effects of the commodification of education.

That said, however, I think there is another factor that could be causing parents’ assessments of their kids’ achievement to be way off: The teachers could be accurately assessing what students have learned in their classes, and the tests could be accurately assessing how well the students’ learning matched the standards, but the material being taught and the material being tested may be very different.

I don’t have any hard data as to whether that is the case, but my observation of such things as topics for Twitter chats and professional development workshops for teachers leads me to believe a great many teachers are focused on teaching such things as a growth mindset and grit, dispositions that can be acquired while students are engaged in activities that require them.

I think today’s educators spend way too much time attempting to teach things that they wouldn’t have to teach if they did a really good job teaching their academic content.

I don’t mean stuffing students with facts.

I mean teaching students to read, write, compute, listen, speak, and think in each of their academic subjects, and assigning work that gives them the opportunity to exercise creativity, to be innovative and entrepreneurial, to treat others with respect, and to make the world a better place.

To that end, it might not be bad if parents did ask their local school boards what they are doing to make sure teachers are teaching the right things.

Should Standards and Assessments be Piloted?

A call for piloting standards and assessments has been raised by educators around the world who are faced with the problems inherent in moving to outcomes-based learning. This tweet from David B. Cohen in the US is representative of the sorts of things I’ve heard:

Tweet Let's pilot new standards and assessments

That verb, to pilot, has a couple of common meanings. Its most common meaning is to lead or guide, typically in difficult conditions. That definition doesn’t fit the context of the tweet; I suspect many of Cohen’s Twitter followers would say the standards and assessments are the difficult conditions.

The second meaning is probably closer to what Cohen has in mind, but even it is not a good match for the context. To pilot can mean to set a course and see that the vessel arrives at its destination. That meaning does not suggest that there’s a flaw in the vessel, only that it requires a skilled operator. I doubt that the folks who are opposed to new standards and new assessments would be caught dead suggesting that better-quality teachers would have no problem using them.

It seems to me that although Cohen uses pilot as a verb, he wants the word to be understood in its adjectival meaning of testing or experimental, as in the phrase “a pilot program to train monkeys to run cash registers.”

Even assuming Cohen wants a limited pilot program to test standards and assessments, I still see a problem.

Standards just are

By itself, the number of people who meet a particular standard doesn’t tell anything about whether the standard is good or bad.

If a carnival ride has a requirement, “people must be 48 inches tall to ride the Cyclone,” having a random sample of 1,000 people line up against the standard won’t tell whether the standard is good or bad. The standard might be set too high for the ride to be profitable for the operator or too low to allow people to ride in relative safety, but those determinations cannot be made just on the basis of the percentage of people who meet the standard.
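
To make the point concrete, here’s a minimal Python sketch with made-up heights and hypothetical cutoffs. It reports what share of a sample clears each candidate standard, and nothing in that output says whether any of the cutoffs is the right one:

    import random

    random.seed(42)

    # Hypothetical sample: 1,000 people with heights drawn from an invented
    # distribution (mean 52 in., standard deviation 8 in.). Illustrative only.
    heights = [random.gauss(52, 8) for _ in range(1000)]

    # Candidate "must be this tall to ride" standards, in inches.
    for cutoff in (44, 48, 52):
        pass_rate = sum(h >= cutoff for h in heights) / len(heights)
        print(f"{cutoff}-inch standard: {pass_rate:.0%} of the sample qualifies")

    # The pass rate is just a count. Whether 48 inches is too strict for the
    # operator's bottom line or too lax for riders' safety has to be judged
    # against outside criteria (engineering limits, injury data), not this number.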

Educational standards are supposedly the sort everyone can meet, while standards for joining the Rockettes or the Navy SEALs are intended to be those that only a few can meet. Both types of standards can be inappropriate for a multitude of reasons.

One of the criticisms of the Common Core State Standards is that some of the grade-level standards are not appropriate to students’ developmental level at that grade. If true (and I think it is), that’s a serious problem.

It is not, however, a problem that’s likely to be solved through small-scale experiments. The folks responsible for the overall standards will have to be convinced by seeing lots of data over a few years—with some assistance from experts in child and adolescent development—that the objective needs to be moved to a different grade level.

Assessments are testable

Unlike standards, assessments can and should be tested. Assessments, however, are evaluated in terms of how well they measure achievement of the standards.

To a considerable extent, assessments can be tested by small groups of the intended users to get rid of the least valid, least reliable assessments. Of course, if the standards were inappropriate to begin with, the assessments are going to be out of whack, too.
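
For what it’s worth, here’s a minimal sketch of one standard way to run that kind of small tryout: classical item analysis on an invented response matrix. Items whose difficulty is extreme or whose discrimination hovers near zero are the ones a tryout would flag for revision. (This illustrates the general technique, not any particular testing program’s procedure.)

    from statistics import mean, pstdev

    # Invented tryout data: rows are students, columns are items (1 = correct).
    responses = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 1],
    ]

    def correlation(xs, ys):
        """Pearson correlation; returns 0 if either list has no spread."""
        mx, my = mean(xs), mean(ys)
        sx, sy = pstdev(xs), pstdev(ys)
        if sx == 0 or sy == 0:
            return 0.0
        cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (sx * sy)

    totals = [sum(row) for row in responses]
    n_items = len(responses[0])

    for i in range(n_items):
        item = [row[i] for row in responses]
        rest = [total - score for total, score in zip(totals, item)]
        difficulty = mean(item)                    # share of the group answering correctly
        discrimination = correlation(item, rest)   # does the item track overall performance?
        print(f"item {i + 1}: difficulty {difficulty:.2f}, discrimination {discrimination:+.2f}")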

I have some sympathy for teachers who feel they are being forced to work with new standards and assessments without adequate preparation. I’m also willing to grant that the first couple of years with new standards and new assessments are going to be a pretty tough slog.

However, I believe teachers can work with (and around) new standards and assessments if they put their minds to it.

A workable approach

A District of Columbia ELA teacher who spoke at a webinar I attended recently described how she implemented the Common Core in her classroom. She chose a few grade-specific standards that she thoroughly agreed with and worked all year, teaching in depth, to achieve those standards.

When the standardized test showed her students didn’t do well on those standards, she said that test was not a valid assessment of students’ understanding of those topics.  She knew her students knew that material well.

She did the same things the next year that she’d done the first year.

The second year her students did very well: Between the first and second years, the standardized test was changed so the test items aligned better with the standards.

I suspect that teachers will find that if they work consistently through the year toward a few of the standards they feel comfortable helping their students achieve, do their own assessments to show students’ learning, and don’t change their teaching to align with a poor test, they’ll be successful, too.

Advanced computer skills for Common Core

Educators have been wailing that students may not have the advanced computer skills necessary to show the extent of their learning when tests aligned to the Common Core State Standards roll out. I have spent quite a bit of time poking around the standards in English Language Arts and haven’t seen any I thought required advanced skills, but what do I know?

Curious about what advanced computer skills might be required, I signed up for a webinar offered by Atomic Learning on the integration of Common Core and technology. The webinar began with quotes from teachers about the computer skills they feared their students would not have. Among the vague rumblings of fear were a few specifics.

One teacher feared students wouldn’t be able to open a PDF file.

Another was concerned that students would not know how to copy text from one file and paste it into another.

There’s no way to know whether the quotes are representative even of clients of the company, let alone whether they are representative of American teachers.

But it is rather scary to think even a couple American teachers consider opening a file and copying and pasting to be “advanced computer skills.”

Questions for Curiosity Development

Curious kitten
Curiosity energizes the cat.

Questions are at the heart of education. The public in general tends to see public education in terms of teaching students to answer questions. That’s one reason standardized tests have so much public support.

However, when I say questions are at the heart of education, I’m not thinking of test questions.

Nor am I thinking of the questions teachers ask, those discussion questions that typically produce no discussion, no thought, and no learning. (I speak as a teacher who “led discussions.”)

The questions I’m thinking about are the ones people ask when they really want to know. Call them curiosity questions. They are questions that lead people to think, explore, and make connections. Curiosity questions are the mark of a learner. Finding ways to develop the type of curiosity that produces learning is the job of educators.

The job has two parts. One part is getting people to ask questions. I do that in the context of teaching students to develop research paper topics. Teachers may need to give students a formula for generating questions and force them to use it until students find a use for one of their questions. Once some dumb thing the teacher makes them do actually proves useful, it ceases to be a dumb thing for students.

The second part of the educator’s job is getting learners  to ask useful questions.  That’s a far more difficult task.  It involves precise use of language, particularly if the question is to be presented in written format where the opportunities to clarify and add detail are limited.  However, that’s not all that is involved.

Asking a useful question requires the ability to look at the situation from the perspective of the person you are asking for the information. In some situations, the questioner might be given aids that spell out what kinds of information to supply in order to get a timely and useful response. In such cases, the questioner is expected to read the directions. That implies, of course, that educators must teach students not only how to read directions but also to read directions. The how part is much easier to teach and learn than the habit.

Reading directions is a good first step, but seeing the situation from the perspective of the person they think has the information requires learners to use their imaginations. Forget “what would I do if I were in the Hunger Games?” In real-life situations, learners need to be able to put themselves in the place of people who have answers to their questions and then supply the information that person needs in order to answer the question.

The learner has to ask questions such as:

  • What information about the student would I need to explain to that student how to use Edmodo?
  • What information would I need to know to tell someone how to use an Excel spreadsheet?
  • If I were the boss at Big Burger, what information would I need to decide if someone is worth interviewing  to work at Big Burger this summer?
  • What  information about a taxpayer would I need to know to advise someone what federal income tax form to use?
  • If I had to help someone find scholarships available to them, what information would I need to know?

Answering those questions requires the kinds of applied imagination and creativity students will need to use in their everyday lives.  People who can shift perspective to see a situation from another point of view are truly creative thinkers.

In the process of figuring out what kinds of information a person needs to answer a question, the learner often finds out the answer to the question. That, I suspect, is one reason that one sees so few well-written questions in public question forums like Yahoo! Answers: People who did the spade work got the answer without having to ask the question.


 

Photo credit: “Little cute cat photo 3” uploaded by aljabak http://www.sxc.hu/photo/1382905


Test Scores: Feedback and Security

Test scores are a divisive topic. A vocal segment of educators thinks standardized tests are the embodiment of all evil, while an equally vocal segment of the public thinks tests are the ultimate answer to the most important questions of life. (I exaggerate, but the positions are almost that far apart.)

I don’t find either position plausible or useful.

This morning I reread a report I wrote in 1988 about a distance learning program for at-risk eighth graders. Last week I read two novels from Alexander McCall Smith’s No. 1 Ladies’ Detective Agency series. One idea from the two very different sources is, I think, relevant to the debate over standardized tests: scores—whether on standardized tests, sales reports, or NFL record books—are feedback to the person who gets them.

In the summer program intended to prevent kids from becoming high school dropouts, students reported the “best” parts of the program were math and English. Teachers reported students’ interest was highest for the social studies, science, and careers topics.

It struck me in 1988 that the reason students placed high value on the program components which didn’t particularly interest them was that those components offered students a way to determine how they were doing.  Their answer to the math problem was either right or wrong;  their paragraph either had six complete sentences (no fragments, comma splices or run-together sentences) or it did not. On the other hand, the topics that interested students didn’t offer them a clear way to assess their understanding. The right/wrong distinction functioned as a security blanket for them.

Mma Makutsi, the secretary/assistant detective at the No. 1 Ladies’ Detective Agency in Gaborone, Botswana, also has a security blanket based on test scores.  Mma Makutsi scored 97 percent, the highest score anyone ever earned at Botswana Secretarial College. Mma Makutsi is not good looking, well-off financially, or well-connected socially.  All her hopes of a better future hang on that 97 percent.

If educators want standardized tests to have less clout in the public arena, it seems to me they have to do a lot better job of building alternative feedback methods into the educational process.

Kids need other kinds of feedback (non-test kinds) regularly.

And so do their teachers and school administrators.

Photo credit: Scanning Test uploaded by lm913

Confidence and illusion in education

An excerpt from Daniel Kahneman’s forthcoming book Thinking, Fast and Slow was published today in the New York Times Magazine under the title “Don’t Blink! The Hazards of Confidence.” The article has applications to the current discussion about education.

Daniel Kahneman, emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics, tells about his personal experience evaluating the leadership potential of candidates for army officer training.

The evaluators’ rigorous methods consistently failed to select candidates that the commanders at the training school viewed as officer material. Despite that regular negative feedback, Kahneman and his colleagues continued to hold confidently to a belief in the validity of their predictions.

He says their error in attempting to predict behavior from a short artificial situation is a common fallacy into which people slide when faced with a difficult situation. “We are prone to think that the world is more regular and predictable than it really is,” Kahneman says.

That’s why people who are gung-ho about using tests to predict students’ future behavior in totally different real life situations are willing to believe in the validity of those tests even in the face of evidence to the contrary.

Before the anti-test folks start to crow, they might want to read the final paragraph of the piece. In it Kahneman talks about factors that lead to development of what we might call “gut-feeling expertise”: the ability to accurately intuit a judgment. That ability, Kahneman says, is developed from “prolonged experience with good feedback on mistakes.”

Two factors figure into such experience, he says. First, the environment needs to be regular so the observations are not merely anecdotal. Second, such expertise depends on “the professionals’ experience and on the quality and speed with which they discover their mistakes.”

Those two factors suggest reasons the confidently expressed observations of the educator can be as flawed as the scores of the standardized test. Classrooms are not noted for their regularity, and mistakes made by teachers may not show up for years, perhaps decades.

In general, Kahneman says:

you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.

What happens in schools?

Class, race and school enjoyment

Becoming Adult: How Teenagers Prepare for the World of Work provides a snapshot of what American teenagers want to be when they grow up and looks in detail at how their workplace skills and attitudes are developed.

Published in 2000, the book is based on a national longitudinal study of American adolescents conducted from 1991 through 1997. Our 2011 economic and technological landscape is quite different from that of 20 years ago; however, some of the data the researchers gathered may be worth looking at now, in spite of or because of those changes.

In this post, I’ll focus on some of the findings about students’ experiences in and attitudes toward school. I’ll save findings about career aspirations for another day.

The authors

Mihaly Csikszentmihalyi is best known for his studies on flow, the mental state in which people are totally engrossed in an activity that is highly challenging but not beyond their capabilities.

Barbara Schneider is currently at Michigan State University. Her research interests focus on how the social contexts of schools and families influence the academic and social well-being of adolescents as they move into adulthood.

The study design

Led by a multidisciplinary team, the study looked at adolescents in grades 6, 8, 10, and 12. The researchers followed more than 1,000 students in 13 school districts representing a cross section of American communities and schools.

Researchers used a variety of methods to get data, including:

  • A survey to collect data about what students know about the world of work and the factors that may contribute to that knowledge;
  • Daily sampling to determine how students spent their time and how they felt about what they were experiencing;
  • Interviews with teens, their parents, and school guidance counselors;
  • Analysis of publications of the schools the teens attended, such as mission statements, budgets, and curriculum descriptions.

Findings

1. How students’ school day is spent

Researchers found that only two-thirds of the students’ school day—four hours of a six-hour day—was spent in classes.

  • Students were in core academic classes (math, science, English, foreign language, history, social studies) just over half (55%) of the school day.
  • Students spent about 12% of their school day in classes outside the core subjects, such as art, physical education, and vocational training.
  • Students spent a third of their school day in unstructured time on school grounds outside of class—the halls, lunchroom, gym, library—or outside the building.

The authors point out that the amount of unstructured time is very high compared to many other countries. In Japan, for example, students spent almost the entire school day in class, even eating at their desks.

2. How students’ class time is spent

Researchers also examined what students did during the four hours they spent in classes. Their findings are summarized in a pie chart patterned after Figure 7.1 in Becoming Adult; a sketch for redrawing the chart from the numbers appears after the list below.

Here’s the actual breakdown of class activities by percentages:

  • 23% of class time listening to the teacher lecture
  • 23% doing individual work
  • 14% taking tests or quizzes
  • 11% doing homework or studying
  • 9% watching TV or video
  • 6% listening or taking notes
  • 5% in discussion
  • 4% talking to friends or the teacher
  • 3% in group work or lab
  • 2% in other activities
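
As promised above, here is a minimal Python sketch (using matplotlib) for redrawing the pie chart from those percentages; the category labels are my shorthand for the items in the list:

    import matplotlib.pyplot as plt

    # Percentages of class time, copied from the list above (after Figure 7.1
    # in Becoming Adult); category labels are abbreviated.
    activities = {
        "Listening to lecture": 23,
        "Individual work": 23,
        "Tests or quizzes": 14,
        "Homework or studying": 11,
        "Watching TV or video": 9,
        "Listening or taking notes": 6,
        "Discussion": 5,
        "Talking to friends or teacher": 4,
        "Group work or lab": 3,
        "Other": 2,
    }

    fig, ax = plt.subplots(figsize=(7, 7))
    ax.pie(list(activities.values()), labels=list(activities.keys()),
           autopct="%.0f%%", startangle=90)
    ax.set_title("How students' class time is spent")
    plt.tight_layout()
    plt.show()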

3. How students feel about school activities

What is more interesting than the numbers is the way students felt about various activities. It should be no surprise that students said lectures are neither enjoyable nor challenging. However, it may be surprising to learn they regarded classroom lectures and video content as unimportant to their career goals.

School activities that students said were engrossing, challenging, and important to their future goals were:

  • Taking tests and quizzes.
  • Doing individual  work.
  • Doing group work.

Group work came in well behind the other two activities, however.

The only parts of the school academic program that matched tests and quizzes in their ability to engage students were the non-core classes like art, music, and computer classes. Although kids said they really enjoyed those activities, they also said they were not important to their future goals.

Musings: If things haven’t changed much

If student attitudes haven’t changed since this study was done—and that’s a big if—we might need to rethink:

  • whether tests are really worthless,
  • whether homework is bad for kids,
  • whether the flipped classroom with video content at home is a cure for education’s ills,
  • whether group work increases learning that students perceive as valuable to their careers,
  • whether we can make students’ perception of the career value of non-core classes more positive.

Testing, choice and Diane Ravitch

As her office was being repainted in 2007, Diane Ravitch, who had written extensively on American education for roughly 40 years, sorted papers and thought about why she was feeling increasingly pessimistic about America’s educational system. She realized that theories she had championed were failing dismally in practice.

Ravitch decided to find out what had gone wrong. The result of that exploration is The Death and Life of the Great American School System: How Testing and Choice are Undermining Education (New York: Basic Books, 2010).

In this book,  Ravitch writes about the American public education system as an inside-outsider.  She’s an insider by virtue of having been an assistant secretary of education in the George H. W. Bush administration and a Clinton appointee to the board that oversees federal testing. Yet she’s an outsider because she’s never been employed in K-12 education.

Part autobiography, part history, and part sales pitch, Ravitch’s book combines the virtues and flaws of all three genres.

What Ravitch tells is the story of a system in which politicians and the public bought snake-oil solutions to reverse the “rising tide of mediocrity” detailed in the 1983 report A Nation at Risk. They were unwilling to do the tough work of developing a national curriculum that spelled out what every child should be learning at every grade level.

Ravitch is at her best when writing in the third person. There her admitted passion for public education is restrained by her historian’s training. She writes lucidly, connecting the dots, giving a feel for the people as well as for the cultural context of events. Her prose is a pleasure to read.

When she brings in her personal experience, however, issues get muddy.

I understand why Ravitch feels she needs to include autobiographical material in view of her recent conversion to an anti-testing, anti-choice position—and she’s undoubtedly correct in that feeling—but the participant-observer material puts my critical senses on high alert.

The most astonishing and disturbing material in Ravitch’s book, to my mind, is her revelation of the extent to which scholars fail to see facts that people outside education take for granted.

For example, as Ravitch explores how the national curriculum movement that began as a result of the Nation at Risk report became derailed when Lynne V. Cheney attacked the history standards for political bias, she says:

Unfortunately, the historians . . . who supervised the writing of the history standards did not anticipate that their political views and their commitment to teaching social history through the lens of race, class, and gender would encounter resistance outside the confines of academe.  (p. 17)

Later as she discusses the foundations that are pouring money into public education, Ravitch says of the Walton Family Foundation, established by Walmart founder Sam Walton, “It simply doesn’t make sense that a family worth billions is looking for new ways to make money.”

In both these observations, Ravitch fails to see patterns of behavior that non-historians would find perfectly predictable: unpopular ideas encounter resistance; people who do something successfully tend to continue doing it.

Ravitch’s focus is on major metropolitan school districts, particularly New York City and San Diego. The big city focus is, in many respects, entirely understandable. School leaders in cities set the tone, and often the curriculum, for the nation.  We in rural America are jerked around by the policies and procedures established by metropolitan America.

That  said, however, I would have liked to see Ravitch acknowledge that the upheaval in education nationwide plays out somewhat differently in rural areas.

In rural areas, teaching may be a high-paying job compared to others available. If the poor pay of teachers in cities poses one challenge for public education, the relatively good pay of teachers in rural areas poses a different challenge.

For example, where I live in Chenango County, NY, the median family income was $42,257 in 2008, according to the US Census Bureau, and 14.2% of the population was below the poverty level. The average teacher’s salary in my district, Bainbridge-Guilford, is $54,516. Given that kind of pay disparity, choice and testing certainly have an impact on public schools, but the impact is expressed differently than in cities.

Also, instead of charter schools as visible reminders of dissatisfaction with public education, in rural America homeschools are a largely invisible expression of dissatisfaction. More significantly, foundations are not rushing into rural areas with money to fill in the gaps for poor students.

I don’t know whether The Death and Life of the Great American School System will change anyone’s mind about choice and testing. My guess is that Ravitch is going to be pretty lonesome for some time to come. That is not a criticism of the author or the book. It’s simply an acknowledgement  that Diane Ravitch is one of very few people willing to say publicly, “I was wrong.”