Showing posts with label assessments. Show all posts

Thursday, July 10, 2014

Summer news on Michigan schools

A little bit of posting links so I can find them again, and a little bit of analysis. 

First, new rules on teacher certification.  The last new rules made it easier to get re-certified: just don't skip the district-provided professional development you already have to go to, and you can re-up your professional certification every 5 years.  I don't remember what the rules were for renewing a provisional certificate, or for moving from a provisional to a professional certificate, 'cos last year I was too freaked out about getting my professional cert renewed.  (Turns out I needn't have worried.)

It looks like the new new rules, effective as of a week ago, make the process of using in-school PD more consistent with using college credits or SCECHs (state-approved, non-district provided PD). 

MEA's flow chart on new certification rules

Next, the state-wide summative assessment.  Until last year this was called the MEAP, and it was the sort of fill-in-the-bubble test which it turns out really only tested reading, no matter what subject name was at the top, but that was okay, because that's pretty much what the ACT does.  (We all know that the ACT is the last word in school efficacy.)  For the last 3 years or so, we have been moving towards the Smarter Balanced assessment, which is computer-based, still kind of multiple choice, and a whole lot harder.  (I for one think this is a good thing, but it has its skeptics, and they include some of the smartest people writing about schools in America.)  Also, unlike the MEAP, the Smarter Balanced assessment was going to take place in the spring, after we'd taught the students what was going to be on it, rather than in October, before they'd learned anything but where to put their backpacks at the beginning of the day.

Over the last year, a number of states have started to backpedal on using the Smarter Balanced assessment and the Common Core standards to which the assessment is tied.  This is a problem for two big reasons.  1.)  The reasons for slowing down have nothing to do with effective education, assessment, or accountability, and heaven knows this plan has plenty of those problems that need addressing.  They seem to be nearly entirely political, which is interesting because the reasons for changing from MEAP to Smarter Balanced also had nothing to do with education and were almost entirely political.  2.) Schools have spent the last 3 years getting ready to implement the Smarter Balanced assessment, and 5 years implementing the Common Core standards.  To change now, the year before everything is supposed to come to fruition, would make for an enormous waste of time and money, neither of which schools have in abundance to begin with.

Michigan is one of the states backpedaling on the Smarter Balanced assessment.  The state government passed a school aid budget that did not include funding for the Smarter Balanced assessment, but did include money for the MEAP assessment.  This, predictably, caused a freak-out.  If you worked in an auto manufacturing plant and were told that the whole plant was going to be re-fitted, so that instead of making Ford Focuses you would now be making Hummers, and then come to find out you're still making Focuses after all, that's the kind of whiplash change we're talking about, only the re-fitting has taken 5 years and you still had to build Focuses the whole time. 

Today the State Superintendent's office issued a clarification memo, of sorts: It won't be the MEAP after all, we're just calling it the MEAP because that's what we used to call it.  It won't be Smarter Balanced, either, because there's no money for that.  We're going to make a new test, just for school year 2014-15, which we promise will be much better than the MEAP, and we'll get back to you on what we're going to do for SY 2015-16 later.  No word on who's going to design the test--my guess is they're going to take questions from old MEAP tests.  No word on who's going to publish the test--my guess is Pearson.  No word on if the test will undergo anything like the months of prep work and field testing across 45 states that Smarter Balanced has already gone through--my guess is no.

MDE's clarification on student assessment memo

Saturday, November 14, 2009

Meta-assessment

The other day I wrote a quick blog post from school, making noise about assessments and a new format and the like.  As I was finally assembling the tests, I was having second thoughts.  It wasn't a grammar-and-vocabulary test, which is what I think of by force of habit as a test.  This assessment covered only communicative skills--listening, speaking, reading, and writing.  I keep telling students that I don't care whether they memorize vocabulary lists or whether they can conjugate verbs; what matters to me is their ability to communicate.  In the first year class, the emphasis is strongly on the comprehension skills, with production being fairly limited in length and scope.  This is in line with language acquisition theory, which says that comprehension will develop before production, and will always occur at a higher level.  It's also in accordance with the ACTFL national standards and the corresponding levels of performance for Novice-Mid to High.

But my tests haven't really reflected that--until this year, the tests have always been (1) listening comprehension (2) reading comprehension with a cultural trivia component (3) grammar and vocabulary sections.  Largely, this is because I took as much of the text-book-provided tests as my students could reasonably do in a day, copied them off and stapled them together.  But I've started redesigning my unit plans the way they're supposed to be designed (see here and here)--which means my assessments needed to be re-written to match learning goals.  And if I don't care if they can memorize vocabulary words, I shouldn't test them on memorizing vocabulary words.

I've hesitated to do this, for three reasons.   1.  It marks a dramatic departure from what I think of as a Spanish test, and I had a hard time wrapping my head around it.  2.  I was worried that a sampling of communicative tasks would overestimate students' abilities to use the language comprehensively.  3.  I wasn't really sure that I was good enough at communicative assessments and using rubrics to assess communicative ability to create a reliable assessment.

I've learned to live with (1) in other contexts--the job of professional educator is not at all what I thought it was.  It's a great deal deeper, more exciting, and science-based than I expected it to be.  (If I'd known what my job was actually going to be, I would have taken a lot of laboratory science and social science classes, and not, for example, Astronomy, Ocean Systems, or Health and Well-Being.)  So, I'm just kind of getting used to the idea that almost everything I thought I'd be doing is the wrong thing, on some level, to do.  (Still trying not to throw out the baby with the bath water...)

Having run a couple of tests, I can now address (2) fairly accurately.  When the learners are participating in good faith in the assessment, their communicative performance gives at least as accurate a picture of their language skills as the previous test formats.  And, of course, it has the added advantage of, y'know, actually testing what I want them to know.  So we can pretty well put (2) to bed.

Concern (3) remains a concern.  I'm not sure that this first communicative exam I gave really gets to the heart of the matter.  But neither did what I was doing before, and change and learning have to start somewhere.  I've changed, and now I'm learning.  A greater sampling of communicative tasks is probably in order, although the format is about right.  My primary concern is with the speaking/listening section.  The first time I tried this, I asked the students a question, and graded them on how well they answered it.  They then had to ask me a question, and write down the answers.  It was the first chapter test of a high school Spanish I class, so an open-ended question probably would have been too much.  But that section as it stood was much more reactive than creative, I think.

The other part of that is the rubrics I'm using.  Given the nature of a limited testing scenario, I don't think I can break the comprehension skills up the way I did.  So, I'll be looking for another way of doing it, so that the comprehension grade really tells the learners what they need to know in order to improve.