As a science professor, I have to deal with these unprepared kids every semester. Lemme tell you, they are really unprepared. And there's no excuse for it. Math and science are not that difficult if they are taught well and if teachers enforce expectations.
At least 75%? I'd want 100% of High School graduates to be able to pass the tests. I'd expect a decent chunk to struggle, but if they graduated High School they should be able to get through at least the first year of University. I agree that not everyone should go to Uni, but the first year really isn't that hard; if you've got a High School diploma, that should be evidence that you have the academic skill to go to Uni. (Though certainly not necessarily that you have the capability to find Uni easy, just that it's theoretically possible.)
"We need to stop this idea that everyone needs to go to college."
When the students don't learn the basic skills they need in high school, they have to go somewhere to get them!
<Conspiracy Mode On>It must be the colleges destroying the public school system, so that more kids will need to go to college.<Silly Conspiracy Mode Off> :-P
Perhaps it is true that not everyone needs to go to college, but everyone does learn differently, and the ACTs are an especially poor example of measuring knowledge. There is so much bias on that test (and I'm not simply talking racial bias) that it's very difficult to gauge if students really are that lacking in these concepts. The reverse is true as well. I had a student (now in her third year at Notre Dame) who got a perfect score on her ACTs but could not explain to me and one of my colleagues the most fundamental aspects of Einstein's theory of relativity. She knew how to take the test and had answers memorized but had no enduring understanding.
One recent study concluded high school students in 23 countries were outperforming U.S. students in math. Students in 16 countries were outperforming U.S. students in science. And nine countries did better in literacy.
Remember that many of these countries (China, for example) are measured by the students who are ALLOWED to be in school, not all people of that age. Only the best and brightest are generally allowed to participate in these assessments, as opposed to everyone who is measured here. So their pool of participants is restricted and heavily biased.
Here's some more information which is useful and very true.
You are correct that direct comparisons between the U.S. and other countries are misleading. In Europe, for example, only the top-notch college-bound students even go to high school. The dumbest/least-motivated quit after 9th grade and the rest go on to vocational schools.
Would you please give specifics on how the ACT is biased? Particularly, how does this bias manifest in terms of math and science?
Also, I've never taken the ACT, but my understanding is that you can't memorize the science portion because it's a reasoning test -- i.e. it tests your ability to read graphs, infer, spot trends, etc. -- not a test of rote material. In fact, one of the test help sites says "no specific background knowledge is required" for this section. Memorization would not have helped your student.
That statement is misleading.
There's no such thing as "high school" or "college" in most of Europe.
Our educational system is structured quite differently, and is just as broken (and of course there are major differences between countries too).
Wait... Can you explain how a MATH test can be biased? Math is a HARD SCIENCE. 1 + 1 will always equal 2, and that stability doesn't change as you go up through higher math. Once you get up to some of the more theoretical math, maybe; I don't know. I have a math disability and was surprised to pass even college algebra, but that level of math isn't on the ACT anyway. Biased my ass.
I stopped at the local BBQ tonight (we needed a fix, y'see). The two Sweet Young Things were yammering with each other as they rang up my sale.
One allowed as how she was allergic to lots of things. She explained, "I just say I'm allergic to things I don't like; cottage cheese, nuts, school ..."
Sarah, here's a link that you might find useful and that answers part of your questions. The scores suggest a gender bias, and this site offers an explanation.
The first paragraph alone details a setting that would be completely foreign to many test takers and I'm not discussing race here at all. Lines 23-25 contain imagery that has no relation whatsoever to many students' lives. In fact, the entire context of this passage is so out of touch with the youth of today, how could there be any serious analysis done?
Of course, this doesn't even begin to get into the shallow bias of analyzing a work of fiction with a #^%#^%# multiple choice question. Ridiculous!
Not too bad but question #7, Set 1 asks about mines. If you don't know what a mine is, does that make it more difficult to answer the question? Question #11, Set 1 is very good, though. This would be a type of question that would pass on an enduring understanding. Question #11 in Set 2 is good as well....what is it with the number 11? 8-)
Passage 3...what are peony seeds? I honestly don't know....I don't garden...do they have a special property that other seeds don't? Look at the date on this question...1985....really?
Something else to consider that was brought up in the first link I put up....40 questions in 35 minutes. Are you kidding me? So, it's now a race to be right first? How does time factor into a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing, re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge?
Finally, I would have to say that there is an overall bias when you measure knowledge based on multiple choice tests. Certainly, an individual with logical-mathematical intelligence would excel. But what about someone with kinesthetic intelligence? Remember, Gardner first came up with this theory while watching a basketball game and reasoning that the "dumb jocks" were actually quite knowledgeable in the areas of geometry and statistics. He saw them learning, with their bodies, the areas on the court where they could be most effective and score the most points.
In regard to my old student, I was speaking primarily of the math portion regarding memorization. In other sections, it wasn't so much memorization, but she had no real enduring understandings. It was simply that she knew how to take the test and reason within its narrow-minded structure. Reasoning outside of it was more difficult for her, and we were fascinated by how little she learned or knew based on her score.
Mark, most of the basic stuff in math requires only memorization.
Yes, you're right that doing well on a college entrance exam is no guarantee that the student is intellectually prepared, but bombing the exam is virtually a guarantee that the student is not prepared.
Nonetheless, it appears that what you're saying is that while the ability to pass the test is no assurance that the test taker has the necessary background knowledge, the inability to pass the test is still a solid indicator that one does not have the requisite knowledge. Or if you like, while it may not be as useful as you'd wish for who "makes the cut", it's still dead on the money in terms of who doesn't.
Which, to me, still seems pretty useful for weeding out those who shouldn't be going to college.
Not too bad but question #7, Set 1 asks about mines. If you don't know what a mine is, does that make it more difficult to answer the question?
I suggest that if you've gone through twelve years of primary education and don't know what a MINE IS, then your primary education system has failed you utterly.
Oh, and you're not qualified to go to college. So the test question is valid. This is true of your other examples as well.
I'm not seeing Mark's comment except when I click on the details of his profile. Weird.
I second Kevin's comment, but even if you don't know what mines or peony seeds are, why should that inhibit you from answering the questions? I have seen tests given to students with deliberate nonsense words in them ("If you have two flibbles and one is faster than the other ... ") to see if students know how to reason with no context whatsoever. And, by the way, a peony is a flower. See? The test question is gender-biased in my favor.
As for gender biases on exams, it doesn't seem to hurt acceptance and matriculation rates for women. The university I teach at admitted 66% female vs. 34% male students this year.
The first paragraph alone details a setting that would be completely foreign to many test takers and I'm not discussing race here at all.
I have no idea what you're talking about - that's a link to a question that changes daily. Today it's talking about ions.
But for the rest of the ones you pointed to, as Sarah said, so?
I don't need to know what a mine is for the question asked.
Nor what a peony seed is.
Those don't matter; what matters is whether you can do the math. The description is irrelevant. In the first, something is filled with water; based on an analogous example, what will you need to clear it? In the second, something happened; based on the changes, what happened?
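To make that concrete (these numbers are invented; I don't have the actual booklet in front of me), the flooded-mine item boils down to a rate problem:

```latex
% Hypothetical figures, for illustration only:
% the shaft holds V = 2000 gallons, and a pump clears Q = 500 gallons per hour.
t = \frac{V}{Q} = \frac{2000\ \text{gal}}{500\ \text{gal/hr}} = 4\ \text{hours}
```

Swap "mine" for "basement" or "bathtub" and the arithmetic doesn't move.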
Passage 3...what are peony seeds? I honestly don't know....I don't garden...do they have a special property that other seeds don't? Look at the date on this question...1985....really?
Has math changed since 1985?
do they have a special property that other seeds don't?
Doesn't matter. If you know how to do the math, then you'd know that. (That would only be indicated by the same experiment with a different seed, and comparing that data.) You're missing the forest by running into trees. You're looking for reasons that the questions would be "biased", but you don't understand the math well enough to know that it doesn't matter.
How does time factor into a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing, re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge?
And there, you're trying to do it again.
Science is about consideration, deliberation. But it's not about being slow on purpose. Either you know the material, or you do not. In my experience, I finished those tests in 1/2 the time allotted or less. If you don't know it, then you deliberate, stall, guess, try to find meanings, etc.
But you're complaining about standardized testing being biased, because there's a limit. That's the point of a "standard". Can you answer these questions in the time allotted, and how many can you get correct? Then we compare that versus others.
But what about someone with kinesthetic intelligence?
Heck, why not say that football players are actually experts at land management?
If you're playing 21, that matters. If you're trying to test understanding of math and related subjects, it fails miserably.
I can catch a fly ball on the baseball field. That's a 2nd derivative, and includes an accurate estimation of velocity without true measurement. Doesn't mean I understand the concepts of derivation, or how to calculate it. (And in fact, I had a hellofa time playing outfield in softball - because of all my practice with baseballs, the size of the ball tricked my kinesthetic-practiced mind.)
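For anyone who wants the formal version of the fly-ball claim, here's a minimal sketch (standard projectile motion, air resistance ignored):

```latex
y(t) = y_0 + v_0 t - \tfrac{1}{2} g t^2, \qquad
v(t) = \frac{dy}{dt} = v_0 - g t, \qquad
a(t) = \frac{d^2 y}{dt^2} = -g
```

The outfielder's brain is, in effect, solving for when y(t) comes back down to glove height. Doing that by feel and knowing how to differentiate are two entirely separate skills, which is the point.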
As I told you, the standardized tests show you quite accurately some things. How well you can play against Michael Jordan isn't one of them. In college, I roomed with 3 members of the basketball team for a semester, and hung out with the team often. One of them went on to be an NBA superstar. Trust me. He didn't know math, calculus, geometry, or land management. He was a really nice guy, and he hustled his ass off. But he didn't know math, despite his learned skill.
"How does time factor in to a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge? "
You aren't doing science, you are answering questions about science. And doing science today means following the scientific method. Remember that?
1) Ask a Question
2) Do Background Research
3) Construct a Hypothesis
4) Test Your Hypothesis by Doing an Experiment
5) Analyze Your Data and Draw a Conclusion
6) Communicate Your Results
The time required depends on what you are doing. The test questions are aimed at the last two steps, so they shouldn't require a lot of time; hence the 40 questions in 35 minutes.
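The pacing arithmetic backs that up:

```latex
\frac{35\ \text{min} \times 60\ \text{s/min}}{40\ \text{questions}} = 52.5\ \text{s per question}
```

Nearly a minute per question, to read data you've already been handed, is not the mad sprint it's being made out to be.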
For grins I worked through the sample from the link provided. I'm amazed the peony seeds tripped you up. The subject matter was immaterial to the data and questions.
As to being rushed, welcome to the real world. Deadlines are a part of it.
Thankfully, many college admissions offices no longer require ACT or SAT scores, although scholarship programs still do.
That's news to me. (And to my school.)
But the SAT is designed with a purpose in mind: to predict future scholarship. And it does a not-bad job of that, really. No, it doesn't totally show everything about a student. But it used to do a pretty good job. Which is why it keeps drawing ire, and getting redesigned, to be less "biased".
But you're not understanding what "bias" means. Accurate prediction gets called biased when it shows minorities performing differently (and worse) than "whites" (and Asians, for some reason, are treated badly here, just like Jews). It must be wrong!
50% of our entering black males will fail or drop out before the end of their 3rd year. The simple fact that the SAT could have predicted this is overlooked; amid cries of racism, the test is obviously the problem - not that the black males are coming in unprepared, or have been promoted beyond their proven knowledge.
This, by the way, is not the case with our black females, who graduate at upwards of 85%. (It's hard to track this, because a lot of students come here to get 2 years cheap, and then transfer to more prestigious schools.)
But if you look at the SAT scores, you could - if you presume the SAT is a decent measure of future performance - guess that. The black male SAT scores are considerably lower than the black female average.
Not liking what the standardized tests tell you is a wholly different matter from the standardized tests being wrong.
SATs and ACTs are an excellent way to weed out the most deficient students. It may not be a precise predictor of success, but it's a pretty good predictor of who is unprepared.
I guess I can see many of your points here. I think, though, that you have to put yourself in the mind of the student and then ask yourself: are these tests valid, and are they reliable? By pointing out the words and the context used in each question, I am simply stating that I don't think it is an accurate measurement of what we are looking for as far as students' abilities. It may be reliable, but I don't think it is valid.
Something else to consider...one of my favorite lines..."American education is a mile wide and an inch deep." Many school districts require their teachers to cover far more material than is necessary. As a result, students have a very rudimentary if not outright shallow understanding of basic concepts. This would be another reason why many countries in East Asia outperform US students. They cover topics in a very in-depth fashion and, as a result, reason better in the science section of the ACT, for example.
"This would be another reason why many countries in East Asia out perform US students."
WaitAMinute! Weren't you the person who pointed out the article describing the different standards of measurement in different countries? That seems to contradict what you just claimed.
It's actually both. Only certain students are allowed to be tested...the higher caliber ones...and those that are tested are taught with a depth that is needed here. We can adhere to standards all we want but if those standards call for too many topics to be covered, will there ever be any enduring understandings? I think not.
DJ and Sarah - they are meaningless if they aren't accurate measures of learning. Certain people take these tests quite well. That doesn't necessarily mean they KNOW the answers. And people who do poorly on the tests may know the material and have a full grasp of it but have test anxiety, and so they do poorly. I'm not saying we necessarily need to change the way we test to accommodate these students, but I am saying that these test scores are not a valid representation of the knowledge of our youth. Thus, they are not to be looked at as "gospel."
I think, though, that you have to put yourself in the mind of the student and then ask yourself: are these tests valid, and are they reliable?
"In the mind of the student?" The student is the one being tested. Pretty much by definition they're not the ones who choose how to prove mastery of the material!
The question isn't one to ask the student - the question is one to be settled by the evidence.
By pointing out the words and the context used in each question, I am simply stating that I don't think it is an accurate measurement of what we are looking for as far as students' abilities.
And you're simply ignoring that the words and "context" were irrelevant.
Mark, if I hand you 5 shoobles, and you have 3 shoobles already, how many shoobles do you have?
Do you need to know what a shooble *is* to do that math? No. The label is irrelevant.
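Here's a throwaway sketch of the same point (the function and the labels are invented purely for illustration): the label rides along for display and never enters the computation.

```python
# The label is carried along for printing only; the arithmetic ignores it.
def total(label: str, handed: int, already_have: int) -> int:
    """How many of `label` you end up with. Note that `label` is unused in the math."""
    return handed + already_have

# Same answer whether the things are shoobles, peony seeds, or mine pumps.
for label in ("shoobles", "peony seeds", "mine pumps"):
    print(label, "->", total(label, 5, 3))  # prints 8 every time
```

If changing the word changes nothing in the computation, it changes nothing in the test question either.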
Same for peony seeds and mine definitions. Now, if those questions asked something that would require that to be known - for instance, if a question, instead of asking about "germination" (which is in the table), said "sprouted" - that would be potentially valid. That requires one to know about seeds and their growth cycle. (Even so, that's still weak - but that's the sort of context that really could matter.)
But they didn't. They asked the student to look at a table, and do some very basic amount of reading of the table, and what it meant.
There's no bias in those questions, because if you know the material, if you know how to perform the requested action, the labels are irrelevant. Utterly.
It may be reliable, but I don't think it is valid.
Yes, we know. Believe me, we know. That's why we're showing you why it is. The labels are utterly irrelevant; you're becoming distracted by them. They - in those questions - don't matter.
They're very valid - it's a simple ratio question for one, and reading a table for another. It's not a problem if you understand that. It's only a "problem" if you're trying to read into it something that's not there.
See here, you've got a case where you made a judgement, and the evidence you're using to back it up doesn't prove your point. (There might be some that would.) But you are still saying "Never mind the facts, I made my mind up." In the spirit of your new tone, look at that, and accept, at least for the sake of argument, that they are valid. What does that mean?
"It may be reliable but I don't think it is valid."
What in the world does that even mean?
Does the ACT test for high school level comprehension of English, mathematics, reading, and science? Does it do so consistently from student to student? If so, but you are still not happy with it, is there a different set of English, mathematics, reading, and science you'd like tested? Ones not at high school level?
Or do you just not like the fact that a standardized test is showing what Kevin et al. keep saying, education is broken in the US?
"Many school districts require their teachers to cover far more material than is necessary."
Pretty much by definition they're not the ones who choose how to prove mastery of the material!
I disagree. I am a proponent of learner centered instruction. I have had students write their own assessments on many occasions. I help them write the rubrics that gauge these assessments. Believe it or not, these assessments are usually ridiculously difficult. Given the chance, students are much harder on themselves than instructors would ever be.
The labels are utterly irrelevant; you're becoming distracted by them.
No, they are relevant, but not the be-all and end-all of everything. They are one of many problems that test takers encounter every day. I would suggest spending some time talking to the students at your university and asking them what they thought about the validity of their tests. Also, spend a half hour or so reading through the FairTest link above. They have a ton of examples of various biases in assessments.
Actually, what we are really talking about here is the value of formative assessment versus the value of summative assessment.
I read your 'Gender Bias' link. Tell me you don't seriously see any flaws in the case they make. It talks about lots of reasons males and females score differently on these tests, but offers only excuses that blame supposed bias in the tests, without presenting any reasons other than the test itself being at fault.
The basic premise is flawed in that they try to compare actual class grades in relation to test scores. As long as teachers and professors base part of the class grade upon things like attendance, preparation, classroom interaction and assignment completion, they are grading on things other than actual knowledge of the material. That's not to say that some of those items are not important to instill, or improper subjects of instruction. But a test of material knowledge is a test of the knowledge, not whether you did your homework last night.
The site you reference did bring up the hard-wired functional differences in gender, but failed to present how those differences could impact the site's basic premise. Males (especially young males) are less likely to be prepared for class, less likely to do their homework, more likely to interact poorly or even disrupt class, and more likely to have poor class attendance.
Rather than assuming that the test has a gender bias because females have better class grades but worse test scores, perhaps you might want to consider that the gender bias starts in the class, because those class grades don't accurately reflect the higher test scores shown by males.
As an anecdote, I had essentially the same SAT score as a girl in high school, yet she brought home straight A's while I had a 3 GPA. She was a valedictorian, I wasn't. By the theory of “fairtest”, because she showed 'better' prior performance she should have scored better than I on the SAT. By my theory, based upon observation, she had better “prior performance” because she didn't skip school to go to work, she turned in her homework every night and she did every bit of extra credit offered. We scored the same on class final exams and our knowledge of the material was essentially equal. The fact that we had a similar grasp of the knowledge was 'proven' by the test scores, the fact that she had better grades was based upon what I am now going to call (just for you) “activity based gender bias”. That is, scores that are biased based upon activity that one gender is more likely to perform.
A valid task should: reflect actual knowledge or performance, not test-taking skill and memorized algorithms; engage and motivate students to perform to the best of their ability; be consistent with current educational theory and practice; be reviewed by experts to judge content quality and authenticity.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
For a task to be fair, its content, context, and performance expectations should: reflect knowledge, values, and experiences that are equally familiar and appropriate to all students; tap knowledge and skills that all students have had adequate time to acquire; be as free as possible of cultural, ethnic, and gender stereotypes.
and
Validity is defined as "an indication of how well an assessment actually measures what it is supposed to measure." The chapter identifies three aspects of an assessment that must be evaluated for validity: tasks, extraneous interference, and consequences.
Reliability is defined as "an indication of the consistency of scores across evaluators or over time." An assessment is considered reliable when the same results occur regardless of when the assessment occurs or who does the scoring. There should be compelling evidence to show that results are consistent across raters and across scoring occasions.
GuardDuck - no doubt people use the test as an excuse from time to time. And it's near impossible to achieve some of these goals listed above. There will always be bias in tests. The goal is to diminish that bias and make the questions as neutral as possible.
I just saw this comment--
It's news to me, too, Unix.
http://www.fairtest.org/university/optional
SATs and ACTs are an excellent way to weed out the most deficient students. It may not be a precise predictor of success, but it's a pretty good predictor of who is unprepared.
See my comments above regarding formative and summative assessment. I'd be interested to see where you fall between the two.
Pretty much by definition they're not the ones who choose how to prove mastery of the material! I disagree.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
I am a proponent of learner centered instruction.
OK.
I have had students write their own assessments on many occasions.
The students should never be writing their own assessments. Now, there's a (pretty good) possibility that you're misusing the word "assessment". Assessment = grading. Judgement. Determination of what they've done, and how it compares to the expectations.
By the very definition of the concept, they can't assess themselves; if they could assess themselves, then they should be teaching the classes. That's not to say the teacher has to be the ultimate expert - but they should be more of an expert than the students.
If the students don't know something, how can they grade themselves as deficient? By definition, they don't know what they don't know!
Yes, this has a lot of similarity with your past - including a refusal and inability to re-assess when deficiencies are demonstrated.
I help them write the rubrics that gauge these assessments. Believe it or not, these assessments are usually ridiculously difficult. Given the chance, students are much harder on themselves than instructors would ever be.
It's one thing for the students to assist with setting goals. Assessment is judging how well they MET THOSE GOALS.
The labels are utterly irrelevant; you're becoming distracted by them.
No, they are relevant, but not the be-all and end-all of everything.
No, they're irrelevant. That's why I demonstrated that. You declare something, but don't back it up. Why does the knowledge of what a mine is, or what a peony seed is, change the result? I don't know what a peony seed is, and I can answer the questions. The questions aren't about mines or seeds; they're about knowing how to do math and read tables. The labels do not matter in that regard. Irrelevant.
They are one of many problems that test takers encounter every day.
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
Actually, what we are really talking about here is the value of formative assessment versus the value of summative assessment.
I tend to fall more on the formative side and it seems, based on your comments, that you fall more on the summative side. Is that true?
Not really. You don't understand, which makes it really hard to explain it to you. If you're still insisting that the label of "mine" or "peony seed" shows bias and prejudices the test, then you don't understand. Period.
A valid task should: reflect actual knowledge or performance, not test-taking skill and memorized algorithms;
At some point, you have to test students on their memorized algorithms. No, not all the time, but it's damn important to remember some of them and have them memorized.
But even so, the examples you gave above did not have that problem. The equation and information were given.
engage and motivate students to perform to the best of their ability; be consistent with current educational theory and practice; be reviewed by experts to judge content quality and authenticity.
The first bolded part is self-referential. As for the second, note that it doesn't specify knowledge, or correctly completed work.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
Which would only be true if the task required specific football knowledge. If you didn't say in the question that a safety is 2 points, that might be valid. (But not really; most women know football rules just fine. That's just anti-female bigotry.)
But using an example of football where you give all the required information does not have that problem.
Just as above, the knowledge of what a mine is is irrelevant to the task demanded - the information needed was given. Same for peony seeds. The information to answer the questions was all given. If you know how to read a chart.
I would suggest spending some time talking to the students at your university and asking them what they thought about the validity of their tests
Mark:
I can't ignore the students. I talk to them all the time. And they complain, quite often, of the things I complained about when I was their age.
Until I discovered that, by gum, yes, that stupid shit is important. And I spend a lot of my time telling the students my war stories and why learning that piddly, picky shit will make their lives FAR EASIER in the future.
Just like people told me that when I was their age. And when I listened, I'm now very thankful. And when I didn't, I say "you know.. if I knew then what I know now..."
We really don't need callow youth setting any sort of academic standards. I've found living out in the real world, where stuff needs to get done ASAP, is a bracing tonic to ward off some of youth's worst excesses.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
As U-J has noted above, often (usually?) the extraneous details such as "in the context of football" are completely irrelevant. What the paragraph you posted above suggests to me is that students aren't being taught how to recognize which details are and are not germane to the problem at hand.
And when it comes right down to it, it can be argued that problems where you have no understanding of contextual details are a vital part of the teaching process. They teach the student how to make the same tools perform the same functions, even in a completely unfamiliar context.
After all, there's not much point educating someone to be competent only in areas they already thoroughly understand, is there? Such would tend to refine existing knowledge and techniques and completely fail to produce new innovations, would it not?
"Wht the paragraph you posted above suggests to me is that students aren't being taught how to recognize which details are and are not germane to the problem at hand."
and
"And when it comes right down to it, it can be argued that problems where you have no understanding of contextual details are a vital part of the teaching process. They teach the student how to make the same tools perform the same functions, even in a completely unfamiliar context.
After all, there's not much point educating someone to be competent only in areas they already thoroughly understand, is there? Such would tend to refine existing knowledge and techniques and completely fail to produce new innovations, would it not?"
BRAVO!
In real life, thinking is involved, the answer is not in the back of the book, and there is no one standing by to pat you on the ego and instantly tell you whether or not you got the correct answer. TEACHING students is about preparing students for that, i.e. for the living in the real world as adults. TESTING students, particularly at the end of twelve years of teaching and learning, is about determining whether or not the teaching and learning were successful.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
That's simply not true. I'm very skilled at evaluating knowledge. I am expressing the opinion of myself and the students that I have worked with over the years who take these tests. This opinion is based on experience and outcome. I'm not really sure how to respond to the notion that I don't understand concepts.
The students should never be writing their own assessments. Now, there's a (pretty good) possibility that you're misusing the word "assessment". Assessment = grading. Judgement. Determination of what they've done, and how it compares to the expectations.
No, students do not grade their own work but we do have peer review in our class all the time that does figure into whether or not they meet or exceed the standards. But they do help to decide the information on which they would like to be assessed, and I decide if it is reflective of the work we have done. As I stated above, more often than not, it leans more to the rigorously difficult and I have to alter it. They are also given rubrics for every major assignment, and many times we talk about the method of assessment...multiple choice, short essay, long essay, or presentation. The latter is chosen quite a bit as they love to work with PowerPoint.
If the students don't know something, how can they grade themselves as deficient? By definition, they don't know what they don't know!
Because the job of any good assessment is to gauge learning and rank it according to the standard. They may know "something" but how in depth do they know it? Do they have an enduring understanding? This would be why rubrics are provided so they know where their knowledge stands prior to the assessment.
It's one thing for the students to assist with setting goals. Assessment is judging how well they MET THOSE GOALS.
Agreed. And that's why we have the rubrics.
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
You're missing the point and I think you need to ask some students at your school about this. They are the ones that are taking the tests now. I'm not certain I can do an adequate job of explaining this problem to you.
Not really. You don't understand, which makes it really hard to explain it to you. If you're still insisting that the label of "mine" or "peony seed" shows bias and prejudices the test, then you don't understand. Period.
Alright, well, what do you think about formative assessments? Summative assessments? "Not really" isn't much of an answer. I knew we'd have some issues here when I brought up the test bias debate. I would urge you to spend a half hour or so sifting through that FairTest site and learning more about the context of testing. Bias has a much larger meaning than mines or peony seeds. These are two examples in a sea of many.
but it's damn important to remember some of them and have them memorized.
Long-term or short-term memory? The former is what one should strive for to produce an enduring understanding. I think with the rest of your comment you're really not putting yourself in the mind of the student who doesn't know about football. And you'd be surprised how many people in general (let alone women) don't know the rules of football. Again, ask the students at your college. Better yet, ask some at the local public schools. See what they say. Context does matter, and your points saying that it doesn't may work for some students but not all of them. Don't you think it's possible that a student who answers the question wrong may have had the knowledge to do the work, but the context made it more difficult?
I'm in a school district of 50,000 children. State average ACT 22.1 - one high school is celebrating an 18.1 as an "improvement." The district is now implementing a new grading strategy designed to reward students for not studying or doing homework assignments. Easier to get a "C" without any work, harder for kids to get an "A" for extending themselves. Individualism is dead - we will all regret the equality of misery to come. Why do I stay? There are a few kids who likely will lead the militia of tomorrow.
I'm not a scientist, I make no claim of being a scientist, what I'm posting below may be just a way of showing how far out of my depth I am. All you guys who are scientists, please correct me if I'm wrong.
I think with the rest of your comment you're really not putting yourself in the mind of the student who doesn't know about football. And you'd be surprised how many people in general (let alone women) don't know the rules of football.
I can see this as a valid point of argument for subjects like history, where at least to some extent, opinions are part of the data. In mathematics and science, not so much. In fact, for the math section of any assessment, it should be important to know if the student is capable of ignoring non-essential context. Why? Because if they can't demonstrate that, you can't be assured that the vector calculus used by the astronomer, the nuclear physicist and the missile designer are all the same mathematical tools performing the same operations in the same way.
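A minimal illustration of that point: the operators themselves carry no context. Whether F is a gravitational field, a neutron flux, or a missile's velocity field,

```latex
\nabla \cdot \mathbf{F} =
\frac{\partial F_x}{\partial x} +
\frac{\partial F_y}{\partial y} +
\frac{\partial F_z}{\partial z}
```

is computed exactly the same way. The subject matter never touches the definition.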
Such details matter when you're planning Mars missions and such, I bet.
"Such details matter when you're planning Mars missions and such, I bet."
And when you're building a bridge, or balancing a checkbook, or making change, or any number of things.
In my unhumble opinion, the worst thing you can do to a student is to always dumb down what you ask him such that he never has to learn how to arrive at an answer.
And when you're building a bridge, or balancing a checkbook, or making change, or any number of things.
Well so far I've never needed vector calculus to balance my checkbook or make change. Perhaps I've managed to be evil enough, white enough and Republican enough, but haven't gotten to be rich enough.
"Don't get me wrong, I suspect I'd really like to be so rich I needed to learn vector calculus to balance my checkbook..."
I know a number of very wealthy people (and worked for some for a very long time), and I do not envy them. Being wealthy is more work than working to try to get wealthy.
I'd rather be secure, stable, and anonymous. Am there, am that.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
That's simply not true. I'm very skilled at evaluating knowledge.
I'm sorry Mark, but it is true. It's what we've been yelling at you about for 3 years now. You are not skilled at evaluating knowledge. We have three years of constant drumbeats of us pointing out to you failures in your methodology, in your collections, in everything, and you still can't understand.
You are not skilled at evaluating knowledge. I'm sorry. This is a perfect example of why you shouldn't judge yourself. Why you can't measure yourself on the scale. It's why I, and many others, have sent you to the study that demonstrates the more incompetent you are, the more confident you are at your self-assessments.
The examples you gave do not depend on the labels. Notice that all of us who know how to do the work said that? It matters not how you describe the problem.
You said "yes they do" and you're insisting that they do, they do, they do.
But they don't. You can change the label to something nonsensical and the kids who know how to do the work will still get the answer right.
You are not good at evaluating evidence. Part of that is how you've continually misused scientific (and other) terms, concepts, and procedures. When we've said something - and who's correct is easily researched - you have ignored that, and kept right on with your incorrect usages. (For example, "I have a theory..." and "primary source".)
I am expressing the opinion of myself and the students that I have worked with over the years who take these tests.
Now, that's true. It's an opinion - but the examples you gave don't back up that opinion as being correct.
This opinion is based on experience and outcome. I'm not really sure how to respond to the notion that I don't understand concepts.
Follow our links and look up what words mean. Learn how to follow processes, learn how to evaluate evidence. Follow a consistent method. To revisit "primary source": you give people you claim "primary source" status 100% validity, despite obvious and easily demonstrated issues. But then you totally ignore "primary sources" we point you to, giving them 0% validity. (First, stop misusing the term "primary source", which is an archeological one, and almost never used correctly.)
No, students do not grade their own work but
No buts.
Then they're not writing their own assessments.
This, again, is a perfect demonstration of the problem here. You said "I have had students write their own assessments on many occasions."
Now you're saying that that's not correct. Your two comments rebut each other. And yet you don't see that. Look, mistakes occur. People say things incorrectly, make mistakes. It happens. But then you have to admit to that - and clarify which is correct.
In one, you say that students write their own assessments; in the other, you say they never do. These are your comments. They directly contradict each other.
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
You're missing the point
No, Mark. I understand the material presented. I've taught it before. I've excelled at it when tested.
Based on that, I feel confident that I have the understanding to explain to you that the examples you chose do not illustrate the conclusions you have made. I've conceded that there might be some that would. But I haven't seen them in a long time. Neither of those examples, nor the football addition, is a good one.
If you don't define something, presuming the reader will know the definition, then yes, that could be biased. But if you give those definitions, and don't put anything in there that requires special knowledge, then no, it's not biased.
and I think you need to ask some students at your school about this. They are the ones that are taking the tests now. I'm not certain I can do an adequate job of explaining this problem to you.
No, you can't, because you're not evaluating the information correctly. The examples you gave don't demonstrate what you've claimed. You can't adequately explain the problem, because the examples you've chosen don't illustrate the problem you're claiming exists.
Notice we're not telling you that bias can't exist - I'm sure we've all experienced it. I have. I've taken a test, in college, with an alcoholic prof who hadn't noticed that the book had changed. We were using a totally different book. One of his questions wasn't in any way covered (it had been moved to the back half of the book). All the bio majors knew it - from their bio classes. We know what could be possible. But the examples you picked, on tests that spend huge amounts of time trying to weed out such bias, don't demonstrate that.
But the one constant, as I noted above, is for students to make excuses for their failures.
"Not really" isn't much of an answer.
You're asking a simple question that has a much less simple answer. Once you can pass the rote memorization, then you can get to the other. But you've got to be able to get PAST it. So far, you and your students are getting hung up on irrelevant details.
Unix, I think some of what you say above is getting back into the chest thumping thing, and I really don't want to go down that path again. No problem if you have that opinion about me. In the final analysis, the only opinions that matter to me are my students' and whether or not they achieve or exceed the standards set out by MN. Preferably, I'd like to see enduring understandings as well. I judge myself based on results and the feedback I get from students and parents. Overall, it has been superior. And this is from parents from all over the political spectrum with a wide variety of views on education.
I also think your opinion of me clouds your judgment. This is evident in your analysis of student written assessments which seems very rigid. They assess themselves all the time, Unix. They write their own exams from time to time. These would be essay exams, not multiple choice, but they do assist me in laying out the framework for the information in the multiple choice tests. They contribute to the writing of the rubric at times. Finally, part of the weight of their grade depends on their peer assessments on group presentations. How intelligent is their feedback? Obviously, I'm handing out their final grade but part of it is based on the work they do to assess themselves. I hope this makes sense.
One other thing to note...if we are going to continue to discuss bias in tests we need to bring in the debate regarding summative and formative assessments. Which has the greater value and why? How do summative assessments assist in taking standardized tests? Are summative tests inherently biased due to their framework? Why or why not? How do formative assessments pass on enduring understandings?
I also think your opinion of me clouds your judgment.
That it might, surely.
So what about my arguments? That's what you cannot understand. You cannot evaluate, and it fails you regularly. My opinion of you is a reflection of that, but you can see, in this thread, a perfect example. You made a claim, didn't understand how to back it up, and can't understand what these things mean, or even clarify ambiguities.
You've contradicted yourself, and you either cannot or will not clarify that. I think it's "can not", because you don't know what words mean. That's my opinion, true. It's colored by 3 years of you misusing words, terms, and dodging proof and objective facts.
Sure. So in this case, who's made the case for their argument? That, again, is the ultimate arbiter of the dispute. You're essentially saying "you're biased, thus I'm right". Like you did with the tests. "They're biased, so they shouldn't be used".
This is evident in your analysis of student written assessments which seems very rigid.
Yes, from your point of view it probably is. It's because you don't understand the meanings of the words we're using.
They assess themselves all the time, Unix.
As they should. As they should learn to do. But they need to be taught to do so fairly. Correctly. And, Mark, you continually misuse words, concepts, and contradict yourself. How are you going to teach them self-assessment? That, by the way, doesn't disagree with what I said earlier. They should self-assess. But they shouldn't be the final arbiter, and someone with more knowledge should be judging them on what they've learned. By definition, they're students. They don't know how much they don't know.
As I didn't when I was their age.
Finally, part of the weight of their grade depends on their peer assessments on group presentations. How intelligent is their feedback? Obviously, I'm handing out their final grade but part of it is based on the work they do to assess themselves. I hope this makes sense.
Part of it is fine. But you made absolute claims, and those don't make sense.
Unix, I think some of what you say above is getting back into the chest thumping thing
Only because you do not understand. It's not so much chest thumping, as it's you refusing to understand what's required to make determinations and judgements, and to clarify and get on the same wavelength.
Now, I've conceded your examples might suck - but they suck if they're claiming what you seem to be claiming. What a "mine" is is irrelevant to the question asked. Knowledge of football is irrelevant if you provide the needed information.
And at some point, you have to just insist and demand that SOME level of knowledge exists. You might as well make the claim that "water" is biased to someone who grew up in the desert, or "pump", or... Your complaint is one that's never solvable, because it's open-ended in its victimization.
That's a problem, and it's endemic in your arguments. You present something, get contradicted, give an example that does not support your conclusion, and you get mad when we hammer you on it. Nor do you ever admit the failings in your arguments!
It's not "chest beating" to remind you of your past failures. 22 versus 15. The FCC's growth. "Primary Sources". The role (and failures) of FEMA. ......
if we are going to continue to discuss bias in tests we need to bring in the debate regarding summative and formative assessments.
You won't admit that "mine" isn't biased, nor that the type of seed is irrelevant to the ability to read a table and answer questions. We can't really "continue to discuss" it, because we don't have examples of "bias".
What we do have is the ability to settle areas with easily proven and found objective facts. I'm happy when we get those objective facts firmed up, and everyone in agreement. You're trying to bypass that to areas that are highly subjective, largely circularly-reasoned (IMO), and create less clarity.
Let's get those settled, before we leave for mystic pastures. Leaving behind uncertainties, and building foundations upon them, leads to failure.
I also think your opinion of me clouds your judgment.
To re-reply to this: for 3 years you've been posting here, Mark, and for 3 years, we've been telling you that your judgement was colored by not understanding root concepts.
This is a near-perfect example where you get distracted onto something that is irrelevant, and insist that it isn't just central, but proves your claims.
Instead of just assuring me it is, explain to me, if you're correct, how knowing what a mine is affects the ability to conduct the task. Please at least take into account what I and others said above.
If you can't make the point why, rather than say it's somehow our fault, stop. Right there. Instead of blaming us, think about that.
(And yes, I'm sure your students claim problems. They've been indoctrinated by the system to find excuses for failure rather than find ways to succeed. That's, more or less, exactly the point that I, and I think Kevin and the others, are making.)
Actually, if you go back and read my initial remarks, I did point out questions that were not biased at all. No doubt, there are some bias-free and easily relatable questions on the ACT. Groups like FairTest have worked very hard to get us to the point that we are at right now, so there should be at least some credit for progress.
The central problem I have with your argument is that it seems very strident and authoritarian, especially when you consider that my analysis was very brief and only offered a couple of examples. In addition, I don't think that you take the student's point of view into consideration very much. I'd very much like to know if they have the knowledge, and standardized tests are riddled with problems in measuring these understandings. Bias is one of them.
A great way to figure out whether or not I am correct in my analysis of the questions above is to do a controlled experiment and see if people who don't know anything about football, peony seeds or mines can still answer the questions correctly. How that would be possible, I do not know. But we do have this.
So now the manner in which testing for bias is carried out is perhaps flawed. And this study was led by a scholar who favors standardized testing. There is a link to the 33 page study. I think you will find it very interesting and illuminating. Let's move on to some more of your comments.
But they shouldn't be the final arbiter, and someone with more knowledge should be judging them on what they've learned.
I agree. They are not the final arbiter.
By definition, they're students. They don't know how much they don't know.
I'm not really sure what you are saying here. I think I may need further explanation on this but on the surface I disagree. Students are more keenly aware of where they need to be than you might think. Instructors are their guides to knowledge, no doubt, but students can really help in defining what and how they learn. A tool I use quite frequently is called a KWL. It's a sheet of paper with 3 columns with a K, W, and L atop each one. The K stands for "What I Know" about whatever lesson or unit we are starting. The W stands for what I'd like to know. And the L stands for what I learned. This is one of many tools the students can use to track their own learning and see where they need to fill in the gaps. It's also good to review for assessments.
And at some point, you have to just insist and demand that SOME level of knowledge exists.
Right. And that's why we have standards-based grading in our state in most school districts. What do you think about standards-based grading?
Your complaint is one that's never solvable, because it's open-ended in its victimization.
That's incorrect. If you look at the math questions I used as examples above, I said they were great. Let's see more questions like those and fewer that have extraneous information that might dilute learning. I also don't think it's victimization...just a poor way to measure knowledge. And the debate over summative and formative assessments very much has everything to do with what we are talking about. They aren't mystic pastures. Standardized tests are summative assessments. Summative assessments themselves may not accurately measure knowledge.
They've been indoctrinated by the system to find excuses for failure rather than find ways to succeed.
I disagree. Most students are desperate to succeed and want to share these achievements with everyone. They may have subjects that they don't like but my experience has been that there is a lot of inspiration and motivation there if instructors take the time to get to know their students and provide a variety of learning opportunities and instructional strategies. Some instructors are lazy and don't do this and that's part of the problem.
When I talk about claiming problems, it's of the Michael Oher variety. They know the information but are frustrated by the limitations of how they can present it. What is your solution for this? Ever heard of Erin Gruwell? She went for some unique solutions and her results were stellar.
"I'm not really sure what you are saying here. I think I may need further explanation on this but on the surface I disagree."
I have stated this numerous times over several years here in Kevin's parlor, and you're only now stating that you don't understand it?
A simple but important principle is taught, or learned the hard way, by engineers. I'll explain it and I'll use examples.
Suppose you design a bridge. You have to contend with "unknowns", meaning things you do not know. There are two types of unknowns: 1) "known unknowns"; and, 2) "unknown unknowns".
No, this is not a joke. Ask any engineer.
An example of a "known unknown" is that there will be some maximum load placed on the bridge, at some point in the future, by vehicles traveling over it. You know that this will happen, but you don't know what its value will be. Thus, this value is something that you don't know, but you know that you don't know it. It is a "known unknown".
An example of an "unknown unknown" is that the steel within the pre-stressed steel/concrete beams is of substandard quality, even though it passed inspection. Because of this, the beams cannot cannot withstand the stresses they were specified for. Thus, this is something that you don't know, but you don't know that you don't know it. It is an "unknown unknown".
An example of the importance of such things should be apparent to you. You recall the I-35W bridge collapse, right?
The statement is that students don't know what they don't know. At this point, that statement ought to be self-explanatory. It is quite simple: There are things that students don't know, but they don't know that they don't know them.
Want a simple example?
A student who is unaware of World War I will not be aware of how World War I led to World War II. For him, that chain of cause-and-effect is an unknown unknown.
"Let's see more questions like those and less that have extraneous information that might dilute learning."
I think you have missed the boat completely. Learning is enhanced by extraneous information; it is not diluted by it.
School is preparation for life beyond school. Life beyond school does not hand you choices to make, problems to solve, or questions to answer with all the extraneous information stripped out. In your terminology, life is not without bias.
In the real world, you are presented daily with choices to make, problems to solve, and questions to answer. Quite often, you are the one who has to decide what the choices are, what the problems are, and what the questions are. In real life, the answers are not in the back of the book, the questions and problems are not in the body of the book, and usually there is no book at all.
In my experience as an engineering student, it was a hallmark of all good engineering problems and exams that more information was given than was needed to solve the problem presented or answer the question asked. Part of the training was learning to sift through the chaff to find the wheat, as it were.
Now, think about why that might be the case. If you are a practicing engineer, and your boss hands you a problem all neatly tied up with a pretty ribbon such that nothing you don't need is tied up within, well, goddamn, dude, why the hell would he need you to solve the problem?
In the real world of engineering, the daily grind is to be told, "Here. Figure out what the problem is, find a solution, and implement it." That BEGINS with wading through the extraneous information to find out what is relevant and what is not.
Been there, done that, and made a successful career of it.
Learning to sift through the garbage to find the core of the matter is part of learning to cope with real life. I am not surprised that you have difficulty with this as a part of teaching, because your writings here over three years are filled with it.
I would have enjoyed watching you try to cope with engineering school as a student. That would have been quite a spectacle.
In my experience as an engineering student, it was a hallmark of all good engineering problems and exams that more information was given than was needed to solve the problem presented or answer the question asked. Part of the training was learning to sift through the chaff to find the wheat, as it were.
This. Even my son's fifth-grade math curriculum (last year) included problems with extra information on a number of occasions. The point of the exercise was not only to compute accurately, but to recognize what information is needed to solve the problem. The curriculum, by the way, was a K12-provided online public school. I had my issues with it, but there's no doubt it was at least an order of magnitude better than the Everyday Math abomination the local brick-and-mortar district insists on using. This year's curriculum (Math+) looks even better so far -- at least it focuses heavily on computation (lots of problems to work).
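Here's a made-up problem in the style DJ and that curriculum both use, chaff included, for anyone who wants the point in miniature (the numbers are invented):

```python
# A made-up problem with deliberate chaff:
# "A pump drains a flooded pit at 30 L/min while rain adds 12 L/min. The
#  pit is 4 m deep, was dug by a crew of 3, and currently holds 600 L.
#  How long until it's dry?"
inflow_l_per_min = 12.0
outflow_l_per_min = 30.0
volume_l = 600.0

# The depth and the crew size are chaff; only the net drain rate matters.
minutes_to_dry = volume_l / (outflow_l_per_min - inflow_l_per_min)
print(round(minutes_to_dry, 1))  # 33.3
```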
Actually, if you go back and read my initial remarks, I did point out questions that were not biased at all. No doubt, there are some bias-free and easily relatable questions on the ACT. Groups like FairTest have worked very hard to get us to the point that we are at right now, so there should be at least some credit for progress.
And we're back at first principles and evaluating arguments again.
Mark, you didn't deal with my argument. When you say "actually", after that you ought to point to an actual something. But you didn't. You didn't deal with it. I am telling you this so you can learn from it. You didn't deal with the argument; you've sidestepped it.
The entire problem gets back to your ability to frame and follow an argument.
You gave examples that did not prove what you said they did, you've argued over minor details, and you've escalated appeals to authority to the very students who, by definition, we're testing to find out what they know.
That's a major problem for logical thinking. Let me restate that: you're holding up, as authoritative experts, the very people whose knowledge, abilities, and skills are under consideration.
By the very definition of the process, they're not there yet. That's not to say they have nothing of value - if you think I'm saying that, you're wrong. But you're putting the cart before the horse, so to speak, when you insist that we take at face value complaints by those who can't do the work.
That doesn't mean they don't have valid complaints. But neither does it mean that their complaints are, indeed, valid. That's where you're failing: you're giving them complete credibility without contextual comparison.
You gave examples "of bias" that are bias-free. That's your failure, and that's what I can't get across to you.
Let's compare my examples to yours. I said that you couldn't evaluate well, and gave the (reference to the) example where you claimed "more people listened to Rush Limbaugh than watched Network News". Under a minute of Googling, and I had that the most that had ever listened to Rush was 15 million, but the average nightly viewership was 22 million. After I pointed those facts out to you, you didn't revise your comment, nor did you retract your statement. So my example backed up my point. I pointed to (the reference to) our first dispute, where you took issue with Kevin's point that government programs never get smaller by citing the FCC as an example. Under a minute of Googling (yes, there's a theme here), and I found that compared to 1980 (since you claimed Reagan gutted them), the FCC's budget had grown (IIRC) 40X. That's my example to back up my contention that you shoot from the lip. You make statements, don't check them first, and don't correct them when someone else checks.
Those examples, to anybody following this, support my point. Your examples do not support your point. In neither case did what a "mine" is or what a "peony seed" is matter to the questions asked and the tasks to be completed.
Now, your examples did serve well for one facet: they back up what we're saying about the victimhood nature of so much of the teaching, and that basic understanding and knowledge isn't being taught. So in that case, your examples made our original case better than the case you thought you were making with them!
And you don't understand that.
See, I understand what you're saying. I disagree with it, but I can understand what you're trying to say. You're just wrong. And when you can understand why those are bad examples, period, no quibbling, you'll be able to discuss this further.
Because in general, Mark, you have to understand that thinking outside the critical-pedagogy mold means making sure you're using the same words, meaning the same things, in order to come to an agreement.
That doesn't mean that you will agree.
Kevin and I, for instance, are in near-total disagreement about joining and maintaining membership in the NRA.
But he and I are in agreement with (essentially) all the facts. Probably all, but let's leave some wiggle room in case. He and I place different values on the same, agreed facts. So we disagree. And that's normal; that's how the world works. It's why most top-down efforts fail: they place one set value on all facets, a value the people in the system don't agree with. He and I disagree, but because we're in agreement on the facts, we understand that, and we can "agree to disagree". We're not "agreeing to disagree" on the facts of the case; we're agreeing that we each have come to a different conclusion based on the same facts.
That's the issue with your evaluations: you don't understand how to back up and make sure that the other person is in agreement with you on the facts, on the meanings, on the overall concepts - and, when they're not, how to work out what those differences are before attempting to go farther into the disagreement and making judgement calls and morality decisions.
You want to discuss advanced heuristics, but you haven't yet understood that if you know the material, those questions aren't biased.
Only if you don't know the material, and need a "reason" that's not your fault for not being able to do the work, does that come up. It exists for those cases.
And as a "counter-point", let me give you something anecdotal.
My mother is a teacher. (In fact, most of my family is in the teaching profession. Yes, I Know It Well.)
9th & 10th grade science. She doesn't understand many concepts. I've explained to her for years, literally years, that if you were in microgravity and you threw a hammer, you'd go backwards. (Technically, rotating around your CoM, but..) If you had a string tied to you and the hammer, when the string came taut, you and the hammer would stop.
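The arithmetic behind that claim is short enough to sketch; the masses here are invented for illustration:

```python
# Conservation of momentum, with assumed masses: throwing the hammer one
# way pushes you the other, no matter how much bigger you are.
m_person = 80.0   # kg (assumed)
m_hammer = 2.0    # kg (assumed)
v_hammer = 10.0   # m/s, thrown away from you

v_person = -(m_hammer * v_hammer) / m_person
print(v_person)                                   # -0.25 m/s of recoil
print(m_person * v_person + m_hammer * v_hammer)  # 0.0 -- momentum stays balanced

# When the string snaps taut it applies equal-and-opposite impulses, so in
# the ideal case you and the hammer end up at rest relative to each other.
```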
Years. "But you're bigger than the hammer"
That's one of many things I've tried to explain to her. But the students love her. There is more to being a teacher than merely being right, or knowing something. There is a lot to getting involved, a lot to motivating, a lot to opening the doors and letting the students discover things. (Her degree is in biology, and she understands that; but 9th and 10th grade science here is physics and physical-world material.)
But at the end of the day, she's not a great teacher, because quite often she can't answer "Why". She can teach from the book, and tell them the right answer from the key, but she doesn't truly understand. They don't know that yet. They're, well, kids. She's funny, she's nice, she's easy to tease, she'll look out for them, etc.
It's attributed to Einstein most popularly, but this is very important: "If you can't explain it to a six year old, you don't understand it yourself." What that means, as DJ explained above, is having the ability to sort through the extra information and distill it down, cutting out complexities and simplifying. It doesn't mean you can reduce every problem to something that simple, but it does mean you can explain the context. "You've got to have enough pumping ability to pull the water out of the hole in the ground - and as it rains, water seeps through the ground and gets in the hole. The bigger the hole, the more water you'll get."
Since my dad was a math teacher for a time, maybe I can make clear something I see by repeating it as I learned it. As I said above, I can at least provisionally agree with the importance of context in many subjects, where psychology, opinion and other subjective evaluations are part of the data. To suggest that the same applies to math suggests a conceptual misunderstanding of what mathematics actually is.
At age 7 I was told math is basically a game you play with numbers, and that if you go far enough into the game you can reach a place where you can make your own rules. True enough, and probably all a 7 year old can handle, conceptually.
At 10 that was clarified. Math is a specialized language, in the same way writing music is a specialized language. The reason musical notation is much the same as it was 300 years ago is because the language describes the sequence of sounds desired with such precision that it is still thoroughly understood today, by people playing instruments that could not possibly be conceived by the composers of 300 years ago.
Math is a specialized language in precisely the same way. It is as complex as it is because math's purpose as language is to describe what is experienced with precision as nearly absolute as possible. The particular thing it's describing makes no difference at all to how the language is used; it's the ultimate Esperanto.
Thus, the Theory of Relativity is, when all is said and done, no more than Einstein's attempt to correct Newton's math. The fact that they had completely different cultural backgrounds, different native languages, and completely different educations, was entirely irrelevant.
You see? It doesn't work unless it is independent of context.
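A toy sketch of that independence, with invented numbers: the same formula answers a savings question and a bacteria question, and the nouns never enter the math.

```python
# A toy sketch with invented numbers: one formula, two unrelated stories.

def grow(initial: float, rate: float, periods: int) -> float:
    """Compound growth; the math doesn't care what it's describing."""
    return initial * (1 + rate) ** periods

savings = grow(1000.0, 0.05, 10)   # dollars at 5% interest for 10 years
bacteria = grow(1000.0, 0.05, 10)  # cells growing 5% per hour for 10 hours
print(savings == bacteria)         # True -- the context never entered the math
```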
Bias is a given - but what level of bias is acceptable for demonstrating the ability to learn in a relatively ordered society; what skills matter?
Reading the predominant language and comprehending the concepts transmitted thereby; performing basic math, including addition, subtraction, multiplication, division, fractions, and percentages; recognizing a basic alphabet, colors, numbers, and a few laws of physics; and communicating in written form using basic rules of punctuation and grammar. Failure to learn these basics generally invalidates one's usefulness in the workforce, academia, and often politics, and has the potential to limit one's lifetime self-actualization.
Inability to recognize, learn, and apply these basic rules can hamstring the student entering the world of adulthood, limiting lifetime wages and self-actualization. A few can overcome this by inheritance or by marrying well, and some can overcome it through politics, but the vast majority will find themselves without means in a society they are unprepared to fill a useful role in, and will then turn to fringe or even criminal activity (that is, except for a few tortured artists who will be discovered by "patrons" of means). That, or they are predestined to be exploited until death for nothing more than their labor.
We can do better. We used to do much better with much less. We should be ashamed at what we have wrought.
Self-actualization is something I strive to pass on to those whom I mentor. It's high up there on Maslow's hierarchy and far too few of us have it. Actually, it ties into what DJ is discussing above, which I agree with wholeheartedly. Students indeed don't know what they don't know, and it is the instructor's job to pass those measuring skills on to their students. Self-actualization is a part of this. DJ's further explanation of what Unix was talking about makes perfect sense. This would be why I use tools like the KWL; it helps to define the playing field of learning.
There is a lot to getting involved, a lot to motivating, a lot to opening the doors and letting the students discover things.
Agreed. This would be a main reason why we have the problems we have...too few teachers do this.
but she doesn't truly understand
That's too bad, because it would mean that no enduring understandings were passed on. But this is what I have been talking about in this thread: students achieve enduring understandings and do well on tests if they can relate the problems to their own lives. Your argument that they need to know it regardless of the context is valid. But will they KNOW it? Will it stay in their long-term memory? Or will they fail to understand, like your mother, because they don't know what a peony seed is or how to play football? This is why I have problems with summative assessments. I'm not sure they pass on enduring understandings. If a student doesn't know what a peony seed is, or a mine, or the rules of football, they might still get the answer right, but they will remember the one about junk food because it relates to their lives. That one has a greater chance of becoming an enduring understanding.
"This is why I have problems with summative assessments. I'm not sure they pass on enduring understandings."
Why is a test supposed to "pass on" skills to students? I thought the whole point of a test was to… oh, what's that word… oh yeah… TEST how well the students had already acquired those skills?
It seems to me that if students actually understand a skill and where such a skill actually applies, then they should be able to apply it even in unfamiliar circumstances. Therefore, it would seem that it's actually better if a test presents a problem in a context which the student isn't familiar with because that would demonstrate the student's ability to adapt and apply those skills without being able to rely on rote memorization.
You can—and imho, should—use familiar or imaginable situations to introduce new skills. Then students should also be taught how to generalize those skills for unfamiliar situations. But that's part of the education process (and intermediate tests to evaluate progress and determine where review is needed), that is not the purpose of a test.
Assessment: 1. the act of assessing; appraisal; evaluation.
"Why is a test supposed to "pass on" skills to students?"
In my experience, a test was always a learning experience, and I mean over and above just improving my skill at taking tests. Regardless of whether or not it was supposed to pass on other skills, it did so.
because they don't know what a peony seed is or how to play football.
The problem with your "point", such as it is, Mark, is that you have yet to get to understanding. No, you can't have an enduring understanding until you have an understanding, we agree.
But you don't understand yet. You don't understand the basics, and you're repeating something that's been rebutted. That's extremely rude, and it indicates that you're not actually evaluating the contending argument (see above as to my assessment), and that you're merely responding automatically, without thought.
The type of seed is meaningless to that question. As long as you keep sidestepping, neither backing up the claim that it does matter nor retracting it as an "example", you're insulting the other people, and you are reinforcing what I claimed about your ability to evaluate.
Yes, I get to claim that as a "victory", if you keep doing what I've described you doing.
You're so certain that you're right that you've not established a base level of understanding of the material yet. You don't have mastery of it; you don't understand it. You're eliding past that and going on to try to discuss much more complex issues - but it's obvious you don't have a base understanding, much less an "enduring" one.
This is why I have problems with summative assessments.
No, it's not. I'm sorry to speak for you, but in this case, I know the "questions to ask."
You don't understand summative assessments, and their value, and their limits.
There's a reason, Mark, we keep trying to get you to work back to base assumptions and facts. That's because you often skip over making sure that those are correct and consistent.
When you don't understand the foundation, there's no possible way you can understand concepts that build upon it. You might - accidentally - be right on an opinion, but the way you arrived there is erroneous and means that your arrival was accidental, and thus does not add to your authority.
More often than not, it means you won't be at the correct place, even accidentally.
Don't try for more complex arguments when you're in disagreement with someone, go back to the SIMPLER arguments, and try and get agreement on the context, the words, the meanings, the memes. That's the only way you'll ever truly understand their side, and be able to judge it versus yours.
Maybe chest thumping isn't really accurate any more. At this point it seems that you are more focused on me than the actual issue itself. Let's see if we can get back to it again.
I understand your point regarding the irrelevance of context. Students should be able to ignore their lack of knowledge of peony seeds, mines, or football and work the problem. Is it accurate to say that your view is that the students are hiding behind their lack of knowledge of how to work the problem by complaining about not knowing what these three things are? A dodge, perhaps?
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom. I offer as evidence the FairTest link above as well as the Aguinis study above. The examples I used above from the ACT site are brief descriptions of a much broader issue that certainly has improved. The days of "Regatta" may be over, but if you examine the Aguinis study more closely, the method of measuring for bias in exams has now been called into question.
A side note, I have really enjoyed this entire dialog as it has sharpened my thinking on pedagogy before school begins again.
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom.
And for every majority you get, you define a minority by ignoring it. The point to having a variety of questions in a variety of contexts is so that a) chances are good there's something that strikes a familiar note, that gives you an insight into how it applies to your life, and b) the vast majority of that variety of questions does not strike a chord with your personal experience, thus requiring you to learn how to make the same tools work the same way, even in an alien context.
What you appear to be suggesting is that if someone learns how to use a hammer to do house framing, it doesn't matter whether he's incompetent to use a hammer to put shingles on a roof... or drive a tent stake... or hang a picture.
Obviously I disagree. I feel like if you're going to teach someone how to use a hammer, you should expect them to become competent enough that they can use the same hammer the same way in any situation justifying the use of a hammer in the first place.
At this point it seems that you are more focused on me than the actual issue itself. Let's see if we can get back to it again.
No. The problem is that you do not understand what you are trying to be an authority on.
There is no way to separate that from the rest of the discussion. It poisons everything that you attempt, and it fatally flaws all your arguments. It is the root cause of the disagreements, and you are unable to review that; you're unable even to run an example as a hypothetical, with givens that you might not agree with but accept for the sake of argument, or even to give a relevant example.
And yet you expect us to hold you as an authority. To allow you to judge and grade yourself, based on how you feel you did, not how well you actually did.
There's just no way to separate the two, at the moment.
I understand your point regarding the irrelevance of context. Students should be able to ignore their lack of knowledge of peony seeds, mines, or football and work the problem.
No, you don't understand. The knowledge, or lack thereof, is totally irrelevant to demonstrating mastery of the concepts requested.
I've said that many times. You can't understand that, stated that plainly, yet you think you're above average at evaluating!
Those two things do not go together. What I, and others, have been telling you, is you're looking for excuses to not attempt the demonstration of learned skills. In the examples you have given, bias doesn't exist. That's not to say it never does, nor that it's not something you really ought to learn to deal with, but in your chosen examples it does not exist.
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom.
Then you're wrong. First, you'll never be able to build a question where you can't have someone complain they didn't know something: "I didn't know what water was!" (1a - by the time they get to high school, they should damn well know what a "mine" is, even if that mattered.) Second, you've got enough trouble with changing or coming up with new definitions that you should - well, we can easily see the problems there. But third, and again, and again, and again: those nouns are irrelevant. They can be removed, changed, modified, or replaced with nonsense words (and often are!), and the question of skills is unchanged.
If you know how to answer the question, the noun won't bother you. "Enduring understanding" is irrelevant to this discussion, everything you needed to know was provided in the questioning.
Other than the ability and practice in doing the actual skill work. Which is what those tests are seeking to determine.
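To put the nonsense-words point in miniature (a toy sketch; the numbers and the nouns, including the deliberately meaningless one, are invented):

```python
# A toy sketch: the same ratio question, with the noun swapped out --
# including for a word that means nothing at all.

def batches(total_items: int, items_per_batch: int) -> float:
    return total_items / items_per_batch

for noun in ("mines", "peony seeds", "glorfs"):  # "glorfs" is deliberately meaningless
    print(noun, batches(120, 8))                 # 15.0 every time
```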
What I'm saying should be very clear. Stop trying to evade and/or muddy the waters.
To directly answer your question: I haven't dealt with either the FairTest "folks" or the Aguinis study yet in this thread.
Considering you're still claiming that questions with all the relevant information given are biased, we aren't able to go past that to more advanced areas.
I think what I have said is very plain. Why don't you deal with that, and let's hammer it down to basics we can agree on, and then work on the more advanced stuff?
Why is a test supposed to "pass on" skills to students? I thought the whole point of a test was to… oh, what's that word… oh yeah… TEST how well the students had already acquired those skills?
Right. Ed's got it in one. I forgot to back up ENOUGH to basics.
This gets back to how you measure, how you assess, and how you compare. How do you compare what students are learning and which teachers are teaching better than others, and how do you establish reliable measurements?
Remember how I was asking you that before, Mark?
Well, Ed's reminded us both that that is a base question.
Of course, there are a variety of ways to do this through both summative assessments and formative assessments. And you're right, it is difficult to compare, not only when some teachers are teaching better than others but when standards vary from state to state. The question is...should we have national standards? That's certainly what Obama wants...a reliable measurement that is established.
I think there is a misunderstanding regarding passing on enduring understandings. Yes, you are measuring how well students have acquired various skills with the assessment. But the assessment should hopefully be reflective of the practical application of these various skills in real life. Many of the complaints on here discuss the fact that students don't have knowledge that relates to even the basics of everyday life. Shouldn't assessments reflect this and connect the learned skills with their practical application in reality? And shouldn't that reality be things that they know and are going to encounter as they move through life?
And shouldn't that reality be things that they know and are going to encounter as they move through life?
How are you going to know in advance what they are going to encounter in life? I'll grant you, the kid from the back woods who works in the tire shop and is content with that is not as likely to need advanced maths as the child of a programmer and a music theory teacher who wants to go to college.
But if you don't show the kid working in the tire shop that the math he uses to do wheel alignments when the computerized machinery is on the fritz is in fact the same math the music theorist uses to tell whether or not a chord progression "works" before they ever hear a note... well you have no way to tell whether you just deprived the world of its next Beethoven, do you?
Teaching a subject, I can see the point of making sure the context is within their understanding of the subject, sure. But a lot of what is taught is not a subject so much as a tool, that works in the same way across several subjects. Math is one of these, as is logic/"critical thinking", as is language. With those, if learning is tied to a particular subject to which the tool applies, aren't you limiting their understanding of the tool by failing to show how it works the same way on what they don't understand?
How many people today do you think have a deep understanding of classical Greek culture? Few, I'd venture. How many do you think are conversationally fluent in the various dialects of Greek spoken 2300-2500 years ago? Fewer still, I suspect. And yet an American 12 year old boy, who wants to be an astronomer when he grows up and can't even point out Greece on a map can understand precisely what Aristarchus of Samos was saying, if he knows the math.
Well, the official answer in the year 2010 is high-stakes testing. Certainly, it does provide aggregate data on how effective pedagogy is today. In all honesty, I'd like to see high-stakes testing for social studies. One of the reasons I got into education was because several young people I had met had no idea how our government worked. One measure of having an enduring understanding, in my opinion, would be a larger percentage of people having a higher level of knowledge (closer to the evaluation and synthesis levels of Bloom's) of the functions of government.
I wholeheartedly agree that another measure should be pedagogy focused on real-life situations and dilemmas. If they learn something in school and that knowledge helps them every day for the rest of their lives performing a task or function, that would be an enduring understanding. Measuring this would require a study that correlates the skills learned in school with success in careers. This might be tough due to other confounding factors.
For social studies, you better believe it. And the consequence of poor results should be fired teachers as opposed to punishing the schools. Right now, NCLB needs to be adjusted a bit so we don't have this scenario any longer.
But proficiency in how our government works is a core tenet and a must. I would agree that high-stakes testing is beneficial for other disciplines as well, particularly in the area of illustrative and important data. This data lets us know where adjustments need to be made. How the adjustments are made, of course, is a matter of debate on effective instructional strategies. Of course, this does not necessarily mean that standardized tests should be the ONLY means of high-stakes testing.
One other thought on measuring enduring understanding...look at our society as a whole. Sadly, I find a growing number of people who lack even a basic understanding of many of the concepts that they should've been taught in school. Of course, we disagree on why this is the case, and my thought on this lack of understanding is pure opinion, but honestly, if you look at the lack of functionality in our society in a number of key areas, one has to wonder how many of us actually have enduring understandings of the basics of academics.
This could be fantastic. It addresses many of the issues I have with standardized tests and provides the real-world situations that many here have requested.
s
I stopped at the local BBQ tonight (we needed a fix, y'see). The two Sweet Young Things were yammering with each other as they rang up my sale.
One allowed as how she was allergic to lots of things. She explained, "I just say I'm allergic to things I don't like; cottage cheese, nuts, school ..."
I couldn't help myself.
"You don't like school?"
"Nope."
"How come?"
"Because NOW we have to actually DO the work."
Oh shit.
"What school is that?"
"College."
"What grade?"
"Junior."
"What major?"
"Elementary education."
Oh shit.
And THAT is QotD.
Hey, I made it!
Congrats! I've been spewing drivel here for years and still haven't been QotD ;)
Sarah, here's a link that you might be useful and answers part of your questions. The scores suggest a gender bias and this site offers an explanation.
http://www.fairtest.org/gender-bias-college-admissions-tests
Now, let's take a look at a sample question.
http://www.actstudent.org/qotd/
The first paragraph alone details a setting that would be completely foreign to many test takers and I'm not discussing race here at all. Lines 23-25 contain imagery that has no relation whatsoever to many students lives. In fact, the entire context of this passage is so out of touch with the youth of today, how could there be any serious analysis done?
Of course, this doesn't even begin to get into the shallow bias of analyzing a work of fiction with a #^%#^%# multiple choice question. Ridiculous!
Let's look at some math questions
http://www.actstudent.org/sampletest/math/math_01.html
Not too bad but question #7, Set 1 asks about mines. If you don't know what a mine is, does that make it more difficult to answer the question? Question #11, Set 1 is very good, though. This would be a type of question that would pass on an enduring understanding. Question #11 in Set 2 is good as well....what is it with the number 11? 8-)
Now let's take a look at science.
http://www.actstudent.org/sampletest/science/sci_03.html
Passage 3...what are peony seeds? I honestly don't know....I don't garden...do they have a special property that other seeds don't? Look at the date on this question...1985....really?
Something else to consider that was brought up in the first link I put up....40 questions in 35 minutes. Are you kidding me? So, it's now a race to be right first? How does time factor in to a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge?
Finally, I would have to say that there is an overall bias when you measure knowlege based on multiple choice tests. Certainly, an individual with logical-mathmatical intelligence would excel. But what about someone with kinesthetic intelligence? Remember, Gardner first came up with this theory while watching a basketball game and reasoning that the "dumb jocks" were actually quite knowledgeable in the areas of geometry and statistics. He saw them learning, with their bodies, the areas on the court where they could be most effective and score the most points.
In regards to my old student, I was speaking primarily of the math portion regarding memorization. In other sections, it wasn't so much memorization but she had no real enduring understandings. It was simply that she knew how to take the test and reason within its narrow minded structure. Reasoning outside of it was more difficult for her and we were fascinated by how little she learned or knew based on her score.
Mark, most of the basic stuff in math requires only memorization.
Yes, you're right that doing well on a college entrance exam is no guarantee that the student is intellectually prepared, but bombing the exam is virtually a guarantee that the student is not prepared.
Thankfully, many college admissions no longer require ACT or SAT scores although scholarship programs still do.
Nonetheless, it appears that what you're saying is that while the ability to pass the test is no assurance that the test taker has the necessary background knowledge, the inability to take the test is still a solid indicator that one does not have the requisite knowledge. Or if you like, while it may not be as useful as you'd wish for who "makes the cut", it's still dead on the money in terms of who doesn't.
Which, to me, still seems pretty useful for weeding out those who shouldn't be going to college.
Not too bad but question #7, Set 1 asks about mines. If you don't know what a mine is, does that make it more difficult to answer the question?
I suggest that if you've gone through twelve years of primary education and don't know what a MINE IS, then your primary education system has failed you utterly.
Oh, and you're not qualified to go to college. So the test question is valid. This is true of your other examples as well.
Which is what Grumpy said.
I'm not seeing Mark's comment except when I click on the details of his profile. Weird.
I second Kevin's comment, but even if you don't know what mines or peony seeds are, why should that inhibit you from answering the questions? I have seen tests given to students with deliberate nonsense words in them ("If you have two flibbles and one is faster than the other ... ") to see if students know how to reason with no context whatsoever. And, by the way, a peony is a flower. See? The test question is gender-biased in my favor.
As for gender biases on exams, it doesn't seem to hurt acceptance and matriculation rates for women. The university I teach at admitted 66% female vs. 34% male students this year.
Schooling is not the same as education. Our kids are being overschooled, but remain woefully uneducated.
I've no personal knowledge of mines, nor peonies, but it is through reading that people become acquainted with things outside their experience.
This crap about tests being biased because a subset of kids, for instance, have no idea of what a "sloop" is, is ridiculous.
The first paragraph alone details a setting that would be completely foreign to many test takers and I'm not discussing race here at all.
I have no idea what you're talking about - that's a link to a question that changes daily. Today it's talking about ions.
But for the rest of the ones you pointed to, as Sarah said, so?
I don't need to know what a mine is for the question asked.
Nor what a peony seed is.
Those don't matter, what matters is can you do the math. The description is irrelevant. In the first, something is filled with water, based on an analogous example, what will you need to clear it? In the second, something happened, based on changes - what happened?
Passage 3...what are peony seeds? I honestly don't know....I don't garden...do they have a special property that other seeds don't? Look at the date on this question...1985....really?
Has math changed since 1985?
do they have a special property that other seeds don't?
Doesn't matter. If you know how to do the math, then you'd know that. (That would only be indicated by the same experiment with a different seed, and comparing that data. You're missing the forest by running into trees. You're looking for reasons that the questions would be "biased", but you don't understand the math enough to know that it doesn't matter.
How does time factor in to a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge?
And there, you're trying to do it again.
Science is about consideration, deliberation. But it's not about being slow on purpose. Either you know the material, or you do not. In my experience, I finished those tests in 1/2 he time allotted or less. If you don't know it, then you deliberate, stall, guess, try and find meanings, etc.
But you're complaining about standardized testings being biased, because there's a limit. That's the point of a "standard". Can you answer these questions in the time allotted, and how many can you get correct? Then we compare that versus others.
But what about someone with kinesthetic intelligence?
Heck, why not say that football players are actually experts at land management?
If you're playing 21, that matters. If you're trying to test understanding of math and related subjects, it fails miserably.
I can catch a fly ball on the baseball field. That's a 2nd derivative, and includes an accurate estimation of velocity without true measurement. Doesn't mean I understand the concepts of derivation, or how to calculate it. (And in fact, I had a hellofa time playing outfield in softball - because of all my practice with baseballs, the size of the ball tricked my kinesthetic-practiced mind.)
As I told you, the standardized tests shows you quite accurately some things. How well you can play against Michael Jordan isn't one of them. In college, I roomed with 3 members of the basketball team for a semester, and hung out with the team often. 1 of whom went on to be a NBA superstar. Trust me. He didn't know math, calculus, geometry, or land management. He was a really nice guy, and he hustled his ass off. But he didn't know math, despite his learned skill.
"How does time factor in to a science test? It was my understanding that science was more about being deliberate....thoughtful...patient...testing re-testing....careful analysis...how can you do that when you are rushed? How is that an accurate measure of knowledge? "
You aren't doing science, you are answering questions about science. And doing science today means following the scientific method. Remember that?
Succinctly distilled explanation:
1) Ask a Question
2) Do Background Research
3) Construct a Hypothesis
4) Test Your Hypothesis by Doing an Experiment
5) Analyze Your Data and Draw a Conclusion
6) Communicate Your Results
The time required is dependent on what you are doing, the test questions are aimed at the last 2 steps, so it shouldn't require a lot of time, hence the 40 questions in 35 minutes.
For grins I worked through the sample from the link provided. I'm amazed the peony seeds tripped you up. The subject matter was immaterial to the data and questions.
As to being rushed, welcome to the real world. Deadlines are a part of it.
Thankfully, many college admissions no longer require ACT or SAT scores although scholarship programs still do.
That's news to me. (And to my school.)
But the SAT is designed with a purpose in mind. To predict future scholarship. And it does a not-bad job of that, really. No, it doesn't totally show everything about a student. But it used to do a pretty good job. Which is why it keeps getting the ire, and redesigned, to be less "biased".
But you're not understanding what "bias" means. Accurately predicting is called biased when it shows minorities performing differently (and worse) than "whites" (and asians, for some reason, are treated badly here, just like Jews.) It must be wrong!
50% of our entering black males will fail or drop out before the end of their 3rd year. The simple fact that the SAT could have predicted this is overlooked, amid cries of racism, and it's obviously a problem - not that the black males are coming unprepared, or they've been promoted beyond their proven knowledge.
This, by the way, is not the case with our black females, who graduate at upwards of 85%. (it's hard to track this, because a lot of students come here to get 2 years cheap, and then transfer to more prestigious schools.)
But if you look at the SAT scores - you could, if you presume SAT is a decent measure of future performance - guess that. The black male SAT scores are considerably lower than the black female average.
Not liking what the standardized tests tells you is a wholly different matter than the standardized tests being wrong.
It's news to me, too, Unix.
SATs and ACTs are an excellent way to weed out the most defficient students. It may not be a precise predictor of success, but it's a pretty good predictor of who is unprepared.
I guess I can see many of your points here. I think, though, that if you put yourself in the mind of the student and then ask yourself...are these tests valid and are they reliable? By pointing out the the words and the context used in each question, I am simply stating that I don't think that it is an accurate measurement of what we are looking for as far as student's abilities. It may be reliable but I don't think it is valid.
Something else to consider...one of my favorite lines..."American education is a mile wide and an inch deep." Many school districts require their teachers to cover far more material than is necessary. As a result, students have a very rudimentary if not outright shallow understanding of basic concepts. This would be another reason why many countries in East Asia out perform US students. They cover topics in a very in depth fashion and, as a result, reason better in the science section of the ACT, for example.
How can the test be reliable and not valid?
"How can the test be reliable and not valid?"
By producing the same meaningless result every time it is used.
But that isn't what happens, is it?
"American education is a mile wide and an inch deep."
So, a student should be expected to know what a mine is, right?
At least what is strip mining!
"This would be another reason why many countries in East Asia out perform US students."
WaitAMinute! Weren't you the person who pointed out the article describing the different standards of measurement in different countries? That seems to contradict what you just claimed.
It's actually both. Only certain students are allowed to be tested...the higher caliber ones...and those that are tested are taught with a depth that is needed here. We can adhere to standards all we want but if those standards call for too many topics to be covered, will there ever be any enduring understandings? I think not.
DJ and Sarah-they are meaningless if they aren't accurate measures of learning. Certain people take these tests quite well. That doesn't necessarily mean they KNOW the answers. And people that do poorly on the tests may know the material and have a full grasp of it but have test anxiety and so they do poorly. I'm not saying we necessarily need to change the way we test to accomodate these students but I am saying that these test scores are not a valid representation of the knowledge of our youth. Thus, they are not to be looked at as "gospel."
I guess I can see many of your points here.
No, I don't think you can.
I think, though, that if you put yourself in the mind of the student and then ask yourself...are these tests valid and are they reliable?
"In the mind of the student?" The student is the one being tested. Pretty much by definition they're not the ones who choose how to prove mastery of the material!
The question isn't one to ask the student - the question is one to be settled by the evidence.
By pointing out the the words and the context used in each question, I am simply stating that I don't think that it is an accurate measurement of what we are looking for as far as student's abilities.
And you're simply ignoring that the words and "context" were irrelevant.
Mark, If I hand you 5 shoobles, and you have 3 shoobles already, how many shoobles do you have?
Do you need to know what a shooble *is* to do that math? No. The label is irrelevant.
Same for peony seeds and mine definitions. Now, if those questions asked something that would require that to be known, for instance, in a question instead of asking about "germination" which is in the table, saying instead "sprouted" - that would be potentially valid. That requires one to know about seeds and their growth cycle. (Even so, that's still weak - but that's the sort of context that really could matter.
But they didn't. They asked the student to look at a table, and do some very basic amount of reading of the table, and what it meant.
There's no bias in those questions, because if you know the material, if you know how to perform the requested action, the labels are irrelevant. Utterly.
It may be reliable but I don't think it is valid.
Yes, we know. Believe me, we know. That's why we're showing you why it is. The labels are utterly irrelevant, you're becoming distracted on them. They - in those questions - don't matter.
They're very valid - it's a simple ratio question for one, and reading a table for another. It's not a problem if you understand that. It's only a "problem" if you're trying to read into it something that's not there.
See here, you've got a case where you made a judgement, and the evidence you're using to back it up doesn't prove your point. (There might be some that would.) But you are still saying "Never mind the facts, I made my mind up." In the spirit of your new tone, look at that, and accept, at least for the sake of argument, that they are valid. What does that mean?
"It may be reliable but I don't think it is valid."
What in the world does that even mean?
Does the ACT test for high school level comprehension of English, mathematics, reading, and science? Does it do so consistently from student to student? If so, but you are still not happy with it, is there a different set of English, mathematics, reading, and science you'd like tested? Ones not at high school level?
Or do you just not like the fact that a standardized test is showing what Kevin et al. keep saying, education is broken in the US?
"Many school districts require their teachers to cover far more material than is necessary."
I see now, our kids are over educated!
Pretty much by definition they're not the ones who choose how to prove mastery of the material!
I disagree. I am a proponent of learner centered instruction. I have had students write their own assessments on many occasions. I help them write the rubrics that gauge these assessments. Believe it or not, these assessments are usually ridiculous in difficulty. Given the chance, students are much harder on themselves than instructors would ever be.
The labels are utterly irrelevant, you're becoming distracted on them.
No, they are relevant but not the end all and be all of everything. They are one of many problems that test takers encounter every day. I would suggest spending some time talking to the students at your university and ask them what they thought about the validity of their tests. Also, spend a half hour or so reading through the fair test link above. They have a ton of examples of various bias in assessments.
Actually, what we are really talking about here is the value of formative assessment versus the value of summative assessment.
http://www.fairtest.org/value-formative-assessment-pdf
I tend to fall more on the formative side and it seems, based on your comments, that you fall more on the summative side. Is that true?
Mark,
I read your 'Gender Bias' link. Tell me you don't seriously see any flaws in the case they make. It talks about lots of reasons males and females score differently on these tests, but only offers excuses that blame the supposed bias in the tests without presenting any other reasons outside of the test being the fault.
The basic premise is flawed in that they try to compare actual class grades in relation to test scores. As long as teachers and professors make partial class grade based upon things like attendance, preparation, classroom interaction and assignment completion they are grading on things other than actual knowledge of the material. Not to say that some of those items are not important to instill and proper items of instruction. But a test of material knowledge is a test of the knowledge, not whether you did your homework last night.
The site you reference did bring up the hard-wired functional differences in gender, but failed to present how those differences could impact the sites basic premise. Males (especially young males) are less likely be prepared for class, less likely to do their homework, more likely to interact poorly or even disrupt class, more likely to have poor class attendance.
Rather than assuming that the test has a gender bias because females have better class grades but worse test scores, perhaps you might want to think that the gender bias starts in the class because those scores don't accurately reflect the higher test scores shown by males.
As an anecdote, I had essentially the same SAT score as a girl in high school, yet she brought home straight A's while I had a 3.0 GPA. She was a valedictorian; I wasn't. By the theory of "fairtest", because she showed 'better' prior performance, she should have scored better than I did on the SAT. By my theory, based upon observation, she had better "prior performance" because she didn't skip school to go to work, she turned in her homework every night, and she did every bit of extra credit offered. We scored the same on class final exams, and our knowledge of the material was essentially equal. The fact that we had a similar grasp of the knowledge was 'proven' by the test scores; the fact that she had better grades was based upon what I am now going to call (just for you) "activity-based gender bias". That is, scores that are biased based upon activity that one gender is more likely to perform.
Here is another link that discusses fairness of assessments. It also illuminates the validity/reliability issue further...
http://www.ncrel.org/sdrs/areas/issues/methods/assment/as5relia.htm
A valid task should: reflect actual knowledge or performance, not test-taking skill and memorized algorithms; engage and motivate students to perform to the best of their ability; be consistent with current educational theory and practice; be reviewed by experts to judge content quality and authenticity.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
For a task to be fair, its content, context, and performance expectations should: reflect knowledge, values, and experiences that are equally familiar and appropriate to all students; tap knowledge and skills that all students have had adequate time to acquire; be as free as possible of cultural, ethnic, and gender stereotypes.
and
Validity is defined as "an indication of how well an assessment actually measures what it is supposed to measure." The chapter identifies three aspects of an assessment that must be evaluated for validity: tasks, extraneous interference, and consequences.
Reliability is defined as "an indication of the consistency of scores across evaluators or over time." An assessment is considered reliable when the same results occur regardless of when the assessment occurs or who does the scoring. There should be compelling evidence to show that results are consistent across raters and across scoring occasions.
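(A classic illustration of the difference, mine and not from the chapter itself: a bathroom scale that always reads five pounds heavy is reliable - it gives the same reading every time - but not valid, because it doesn't measure what it claims to measure.)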
GuardDuck - no doubt people use the test as an excuse from time to time. And it's near impossible to achieve some of the goals listed above. There will always be bias in tests. The goal is to diminish that bias and make the questions as neutral as possible.
I just saw this comment--
It's news to me, too, Unix.
http://www.fairtest.org/university/optional
SATs and ACTs are an excellent way to weed out the most deficient students. They may not be a precise predictor of success, but they're a pretty good predictor of who is unprepared.
See my comments above regarding formative and summative assessment. I'd be interested to see where you fall between the two.
Pretty much by definition they're not the ones who choose how to prove mastery of the material!
I disagree.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
I am a proponent of learner centered instruction.
OK.
I have had students write their own assessments on many occasions.
The students should never be writing their own assessments. Now, there's a (pretty good) possibility that you're misusing the word "assessment". Assessment = grading. Judgement. Determination on what they've done, and where it compares to the expectations.
By the very definition of the concept, they can't assess themselves; if they could assess themselves, then they should be teaching the classes. That's not to say the teacher has to be the ultimate expert - but they should be more of an expert than the students.
If the students don't know something, how can they grade themselves as deficient? By definition, they don't know what they don't know!
Yes, this has a lot of similarity with your past - including a refusal and inability to re-assess when deficiencies are demonstrated.
This, and the other continuation (5000 characters my ass) is me.
OK, good example there.
Did those 2 comments have 5000 characters? Yes, or no?
I help them write the rubrics that gauge these assessments. Believe it or not, these assessments are usually ridiculous in difficulty. Given the chance, students are much harder on themselves than instructors would ever be.
It's one thing for the students to assist with setting goals. Assessment is judging how well they MET THOSE GOALS.
The labels are utterly irrelevant; you're getting distracted by them.
No, they are relevant, but they're not the be-all and end-all of everything.
No, they're irrelevant. That's why I demonstrated that. You declare something, but don't back it up. Why does the knowledge of what a mine is or what a peony seed is change the result? I don't know what a peony seed is and I can answer the questions. The questions aren't about mines, or seeds, it's about knowing how to do math, and read tables. The labels do not matter in that regard. Irrelevant.
They are one of many problems that test takers encounter every day.
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
Actually, what we are really talking about here is the value of formative assessment versus the value of summative assessment.
I tend to fall more on the formative side and it seems, based on your comments, that you fall more on the summative side. Is that true?
Not really. You don't understand, which makes it really hard to explain it to you. If you're still insisting that the label of "mine" or "peony seed" shows bias and prejudices the test, then you don't understand. Period.
A valid task should: reflect actual knowledge or performance, not test-taking skill and memorized algorithms;
At some point, you have to test students on their memorized algorithms. No, not all the time, but it's damn important to remember some of them and have them memorized.
But even so, the examples you gave above did not have that problem. The equation and the information were given.
engage and motivate students to perform to the best of their ability; be consistent with current educational theory and practice; be reviewed by experts to judge content quality and authenticity.
The first bolded part is self-referential. The second: note it doesn't specify knowledge or completion of work, correctly.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
Which would only be true if the task required specific football knowledge. If the question didn't tell you that a safety is worth 2 points, that might be valid. (But not really; most women know football rules just fine - that's just anti-female bigotry.)
But using an example of football where you give all the required information does not have that problem.
Just as above, the knowledge of what a mine is is irrelevant to the task demanded - the information needed was given. Same for peony seeds. The information to answer the questions was all given - if you know how to read a chart.
That's the task demanded.
I would suggest spending some time talking to the students at your university and asking them what they think about the validity of their tests
Mark:
I can't ignore the students. I talk to them all the time. And they complain, quite often, of the things I complained about when I was their age.
Until I discovered that, by gum, yes, that stupid shit is important. And I spend a lot of my time telling the students my war stories and why learning that piddly, picky shit will make their lives FAR EASIER in the future.
Just like people told me when I was their age. And when I listened, I'm now very thankful. And when I didn't, I say "you know... if I knew then what I know now..."
We really don't need callow youth setting any sort of academic standards. I've found that living out in the real world, where stuff needs to get done ASAP, is a bracing tonic to ward off some of youth's worst excesses.
"If a task is set in the context of football and students who have a knowledge of football have an advantage on the task, that knowledge is an extraneous factor. The context becomes a biasing factor if particular groups of students know less about football than other groups of students. For example, in this society few girls have experience playing football. If boys, in general, have experience with the game and more knowledge of its structure and rules, then the task could be biased in favor of boys."
As U-J has noted above, often (usually?) the extraneous details such as "in the context of football" are completely irrelevant. What the paragraph you posted above suggests to me is that students aren't being taught how to recognize which details are and are not germane to the problem at hand.
And when it comes right down to it, it can be argued that problems where you have no understanding of contextual details are a vital part of the teaching process. They teach the student how to make the same tools perform the same functions, even in a completely unfamiliar context.
After all, there's not much point educating someone to be competent only in areas they already thoroughly understand, is there? Such would tend to refine existing knowledge and techniques and completely fail to produce new innovations, would it not?
"Wht the paragraph you posted above suggests to me is that students aren't being taught how to recognize which details are and are not germane to the problem at hand."
and
"And when it comes right down to it, it can be argued that problems where you have no understanding of contextual details are a vital part of the teaching process. They teach the student how to make the same tools perform the same functions, even in a completely unfamiliar context.
After all, there's not much point educating someone to be competent only in areas they already thoroughly understand, is there? Such would tend to refine existing knowledge and techniques and completely fail to produce new innovations, would it not?"
BRAVO!
In real life, thinking is involved, the answer is not in the back of the book, and there is no one standing by to pat you on the ego and instantly tell you whether or not you got the correct answer. TEACHING students is about preparing students for that, i.e. for the living in the real world as adults. TESTING students, particularly at the end of twelve years of teaching and learning, is about determining whether or not the teaching and learning were successful.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
That's simply not true. I'm very skilled at evaluating knowledge. I am expressing the opinion of myself and the students that I have worked with over the years who take these tests. This opinion is based on experience and outcome. I'm not really sure how to respond to the notion that I don't understand concepts.
The students should never be writing their own assessments. Now, there's a (pretty good) possibility that you're misusing the word "assessment". Assessment = grading. Judgement. Determination on what they've done, and where it compares to the expectations.
No, students do not grade their own work but we do have peer review in our class all the time that does figure into whether or not they meet or exceed the standards. But they do help to decide the information on which they would like to be assessed, and I decide if it is reflective of the work we have done. As I stated above, more often than not, it leans more to the rigorously difficult and I have to alter it. They are also given rubrics for every major assignment, and many times we talk about the method of assessment... multiple choice, short essay, long essay, or presentation. The latter is chosen quite a bit, as they love to work with PowerPoint.
If the students don't know something, how can they grade themselves as deficient? By definition, they don't know what they don't know!
Because the job of any good assessment is to gauge learning and rank it according to the standard. They may know "something" but how in depth do they know it? Do they have an enduring understanding? This would be why rubrics are provided so they know where their knowledge stands prior to the assessment.
It's one thing for the students to assist with setting goals. Assessment is judging how well they MET THOSE GOALS.
Agreed. And that's why we have the rubrics.
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
You're missing the point and I think you need to ask some students at your school about this. They are the ones that are taking the tests now. I'm not certain I can do an adequate job of explaining this problem to you.
Not really. You don't understand, which makes it really hard to explain it to you. If you're still insisting that the label of "mine" or "peony seed" shows bias and prejudices the test, then you don't understand. Period.
Alright, well, what do you think about formative assessments? Summative assessments? "Not really" isn't much of an answer. I knew we'd have some issues here when I brought up the test bias debate. I would urge you to spend a half hour or so sifting through that FairTest site and learning more about the context of testing. Bias has a much larger meaning than mines or peony seeds. These are two examples in a sea of many.
but it's damn important to remember some of them and have them memorized.
Long-term or short-term memory? The former is what one should strive for to produce an enduring understanding. I think with the rest of your comment you're really not putting yourself in the mind of the student who doesn't know about football. And you'd be surprised how many people in general (let alone women) don't know the rules of football. Again, ask the students at your college. Better yet, ask some at the local public schools. See what they say. Context does matter, and your points saying that it doesn't may work for some students, but not all of them. Don't you think it's possible that the student who answers the question wrong may have had the knowledge to do the work, but the context made it more difficult?
I'm in a school district of 50,000 children. State average ACT 22.1 - one high school is celebrating an 18.1 as an "improvement." The district is now implementing a new grading strategy designed to reward students for not studying or doing homework assignments. Easier to get a "C" without any work, harder for kids to get an "A" for extending themselves. Individualism is dead - we will all regret the equality of misery to come. Why do I stay? There are a few kids who likely will lead the militia of tomorrow.
I'm not a scientist, I make no claim of being a scientist, what I'm posting below may be just a way of showing how far out of my depth I am. All you guys who are scientists, please correct me if I'm wrong.
I think with the rest of your comment you're really not putting yourself in the mind of the student who doesn't know about football. And you'd be surprised how many people in general (let alone women) don't know the rules of football.
I can see this as a valid point of argument for subjects like history, where, at least to some extent, opinions are part of the data. In mathematics and science, not so much. In fact, for the math section of any assessment, it should be important to know if the student is capable of ignoring non-essential context. Why? Because if they can't demonstrate that, you can't be assured that the vector calculus used by the astronomer, the nuclear physicist, and the missile designer is the same mathematical tool performing the same operations in the same way.
Such details matter when you're planning Mars missions and such, I bet.
"Such details matter when you're planning Mars missions and such, I bet."
And when you're building a bridge, or balancing a checkbook, or making change, or any number of things.
In my unhumble opinion, the worst thing you can do to a student is to always dumb down what you ask him such that he never has to learn how to arrive at an answer.
And when you're building a bridge, or balancing a checkbook, or making change, or any number of things.
Well so far I've never needed vector calculus to balance my checkbook or make change. Perhaps I've managed to be evil enough, white enough and Republican enough, but haven't gotten to be rich enough.
;)
Don't get me wrong, I suspect I'd really like to be so rich I needed to learn vector calculus to balance my checkbook...
:-$
"... details matter ..."
And balancing a checkbook is about details, isn't it? It's not about opinions, is it?
When the calculus is done, it becomes arithmetic, and arithmetic is all details.
Speaking of details, this comment went up as "Guest", but it was me, so I deleted it and re-posted it. Sigh ...
It's my opinion that my account has a billion dollars in it.
It's my bank's records that show I have a rounding error in my account.
"Don't get me wrong, I suspect I'd really like to be so rich I needed to learn vector calculus to balance my checkbook..."
I know a number of very wealthy people (and worked for some for a very long time), and I do not envy them. Being wealthy is more work than working to try to get wealthy.
I'd rather be secure, stable, and anonymous. Am there, am that.
And you've demonstrated that you don't know how to evaluate knowledge. How to weigh evidence. I'm all for respecting your change in tone, but we've got 3 years - that's continuing now - with proof that you don't understand concepts.
That's simply not true. I'm very skilled at evaluating knowledge.
I'm sorry, Mark, but it is true. It's what we've been yelling at you about for 3 years now. You are not skilled at evaluating knowledge. We have three years of constant drumbeats of us pointing out failures in your methodology, in your collections, in everything, and you still can't understand.
You are not skilled at evaluating knowledge. I'm sorry. This is a perfect example of why you shouldn't judge yourself. Why you can't measure yourself on the scale. It's why I, and many others, have sent you to the study that demonstrates the more incompetent you are, the more confident you are at your self-assessments.
The examples you gave do not depend on the labels. Notice that all of us who know how to do the work said that? It matters not how you describe the problem.
You said "yes they do" and you're insisting that they do, they do, they do.
But they don't. You can change the label to something nonsensical and the kids who know how to do the work will still get the answer right.
You are not good at evaluating evidence. Part of that is how you've continually misused scientific (and other) terms, concepts, and procedures. When we've said something - and who's correct is easily researched - you have ignored that, and kept right on with your incorrect usages. (For example, "I have a theory..." and "primary source".)
I am expressing the opinion of myself and the students that I have worked with over the years who take these tests.
Now, that's true. It's an opinion - but the examples you gave don't back up that opinion as being correct.
This opinion is based on experience and outcome. I'm not really sure how to respond to the notion that I don't understand concepts.
Follow our links and look up what words mean. Learn how to follow processes; learn how to evaluate evidence. Follow a consistent method. To revisit "primary source": you give people you claim "primary source" status 100% validity, despite obvious and easily demonstrated issues. But then you totally ignore the "primary sources" we point you to, giving them 0% validity. (And first, stop misusing the term "primary source", which is an archeological one, and almost never used correctly.)
No, students do not grade their own work but
No buts.
Then they're not writing their own assessments.
This, again, is a perfect demonstration of the problem here. You said "I have had students write their own assessments on many occasions."
Now you're saying that that's not correct. Your two comments rebut each other. And yet you don't see that. Look, mistakes occur. People say things incorrectly, make mistakes. It happens. But then you have to admit to that - and clarify which is correct.
In one you say that students write their own assessments; in the other, you say they never do. These are your comments. They directly contradict each other.
Now, what was that about evaluating?
They are totally irrelevant to knowing HOW TO ANSWER THE QUESTIONS
You're missing the point
No, Mark. I understand the material presented. I've taught it before. I've excelled at it when tested.
Based on that, I feel confident that I have the understanding to explain to you that the examples you chose do not illustrate the conclusions you have made. I've conceded that there might be some that would. But I haven't seen them in a long time. Neither of those examples, nor the football addition, is a good one.
If you don't define something, presuming the reader will know the definition, then yes, that could be biased. But if you give those definitions, and don't put anything in there that requires special knowledge, then no, it's not biased.
and I think you need to ask some students at your school about this. They are the ones that are taking the tests now. I'm not certain I can do an adequate job of explaining this problem to you.
No, you can't, because you're not evaluating the information correctly. The examples you gave don't demonstrate what you've claimed. You can't adequately explain the problem, because the examples you've chosen don't illustrate the problem you're claiming exists.
Notice we're not telling you that bias can't exist - I'm sure we've all experienced it. I have. I've taken a test, in college, with an alcoholic prof who hadn't noticed that the book had changed. We were using a totally different book. One of his questions wasn't in any way covered (it had been moved to the back half of the book). All the bio-majors knew it - from their bio-classes. We know what could be possible. But the examples you picked, on tests that spend huge amounts of time trying to weed out such bias, don't demonstrate that.
But the one constant, as I noted above, is for students to make excuses for their failures.
"Not really" isn't much of an answer.
You're asking a simple question that has a much less simple answer. Once you can pass the rote memorization, then you can get to the other. But you've got to be able to get PAST it. So far, you and your students are getting hung up on irrelevant details.
Unix, I think some of what you say above is getting back into the chest thumping thing and I really don't want to go down that path again. No problem if you have that opinion about me. In the final analysis, the only opinions that matter to me are those of my students, and whether or not they achieve or exceed the standards set out by MN. Preferably, I'd like to see enduring understandings as well. I judge myself based on results and the feedback I get from students and parents. Overall, it has been superior. And this is from parents from all over the political spectrum, with a wide variety of views on education.
I also think your opinion of me clouds your judgment. This is evident in your analysis of student written assessments which seems very rigid. They assess themselves all the time, Unix. They write their own exams from time to time. These would be essay exams, not multiple choice, but they do assist me in laying out the framework for the information in the multiple choice tests. They contribute to the writing of the rubric at times. Finally, part of the weight of their grade depends on their peer assessments on group presentations. How intelligent is their feedback? Obviously, I'm handing out their final grade but part of it is based on the work they do to assess themselves. I hope this makes sense.
One other thing to note...if we are going to continue to discuss bias in tests we need to bring in the debate regarding summative and formative assessments. Which has the greater value, and why? How do summative assessments assist in taking standardized tests? Are summative tests inherently biased due to their framework? Why or why not? How do formative assessments pass on enduring understandings?
I also think your opinion of me clouds your judgment.
That it might, surely.
So what about my arguments? That's what you cannot understand. You cannot evaluate, and it fails you regularly. My opinion of you is a reflection of that, but you can see, in this thread, a perfect example. You made a claim, didn't understand how to back it up, and can't understand what these things mean, or even clarify ambiguities.
You've contradicted yourself, and you either cannot or will not clarify that. I think it's "can not", because you don't know what words mean. That's my opinion, true. It's colored by 3 years of you misusing words, terms, and dodging proof and objective facts.
Sure. So in this case, who's made the case for their argument? That's, again, the ultimate arbiter of the dispute. You're essentially saying "you're biased, thus I'm right". Like you did with the tests. "They're biased, so they shouldn't be used".
This is evident in your analysis of student written assessments which seems very rigid.
Yes, from your point of view it probably is. It's because you don't understand the meanings of the words we're using.
They assess themselves all the time, Unix.
As they should. As they should learn to do. But they need to be taught to do so fairly. Correctly. And, Mark, you continually misuse words and concepts, and contradict yourself. How are you going to teach them self-assessment? That, by the way, doesn't disagree with what I said earlier. They should self-assess. But they shouldn't be the final arbiter, and someone with more knowledge should be judging them on what they've learned. By definition, they're students. They don't know how much they don't know.
As I didn't when I was their age.
Finally, part of the weight of their grade depends on their peer assessments on group presentations. How intelligent is their feedback? Obviously, I'm handing out their final grade but part of it is based on the work they do to assess themselves. I hope this makes sense.
Part of it is fine. But you made absolute claims, and those don't make sense.
Unix, I think some of what you say above is getting back into the chest thumping thing
Only because you do not understand. It's not so much chest thumping, as it's you refusing to understand what's required to make determinations and judgements, and to clarify and get on the same wavelength.
Now, I've conceded your examples might suck - but they suck if they're claiming what you seem to be claiming. What a "mine" is is irrelevant to the question asked. Knowledge of football is irrelevant if you provide the needed information.
And at some point, you have to just insist and demand that SOME level of knowledge exists. You might as well make the claim that "water" is biased to someone who grew up in the desert, or "pump", or... Your complaint is one that's never solvable, because it's open-ended in its victimization.
That's a problem, and it's endemic in your arguments. You present something, get contradicted, give an example that does not support your conclusion, and you get mad when we hammer you on it. Nor do you ever admit the failings in your arguments!
It's not "chest beating" to remind you of your past failures. 22 versus 15. The FCC's growth. "Primary Sources". The role (and failures) of FEMA. ......
You don't do that well.
if we are going to continue to discuss bias in tests we need to bring in the debate regarding summative and formative assessments.
You won't admit that "mine" isn't biased, nor that the type of seed is irrelevant to the ability to read a table and answer questions. We can't really "continue to discuss" it, because we don't have examples of "bias".
What we do have is you trying to leave areas with easily proven, easily found objective facts. I'm happy when we get those objective facts firmed up, and everyone in agreement. You're trying to bypass that for areas that are highly subjective, largely circularly-reasoned (IMO), and that create less clarity.
Let's get those settled, before we leave for mystic pastures. Leaving behind uncertainties, and building foundations upon them, leads to failure.
I also think your opinion of me clouds your judgment.
To re-reply to this, for 3 years you've been posting here Mark, and for 3 years, we've been telling you that your judgement was colored by not understanding root concepts.
This is a near-perfect example where you get distracted onto something that is irrelevant, and insist that it isn't just central, but proves your claims.
Instead of just assuring me it is, explain to me, if you're correct, how knowing what a mine is affects the ability to conduct the task. Please at least take into account what I and others said above.
If you can't make the point why, rather than saying it's somehow our fault, stop. Right there. Instead of blaming us, think about that.
(And yes, I'm sure your students claim problems. They've been indoctrinated by the system to find excuses for failure rather than find ways to succeed. That's more or less, exactly the point that I, and I think Kevin and the others are making.)
So what about my arguments?
Actually, if you go back and read my initial remarks, I did point out questions that were not biased at all. No doubt, there are some bias-free and easily relatable questions on the ACT. Groups like FairTest have worked very hard to get us to the point that we are at right now, so there should be at least some credit for progress.
The central problem I have with your argument is that it seems very strident and authoritarian, especially when you consider that my analysis was very brief and only offered a couple of examples. In addition, I don't think you take the student's point of view into consideration very much. I'd very much like to know whether they have the knowledge, and standardized tests are riddled with problems in measuring these understandings. Bias is one of them.
A great way to figure out whether or not I am correct in my analysis of the questions above would be to do a controlled experiment and see if people who don't know anything about football, peony seeds, or mines can still answer the questions correctly. How that would be possible, I do not know. But we do have this.
http://www.insidehighered.com/news/2010/08/02/testing
So now the manner in which testing for bias is carried out is perhaps flawed. And this study was led by a scholar who favors standardized testing. There is a link to the 33 page study. I think you will find it very interesting and illuminating. Let's move on to some more of your comments.
But they shouldn't be the final arbiter, and someone with more knowledge should be judging them on what they've learned.
I agree. They are not the final arbiter.
By definition, they're students. They don't know how much they don't know.
I'm not really sure what you are saying here. I think I may need further explanation on this but on the surface I disagree. Students are more keenly aware of where they need to be than you might think. Instructors are their guides to knowledge, no doubt, but students can really help in defining what and how they learn. A tool I use quite frequently is called a KWL. It's a sheet of paper with 3 columns, with a K, W, and L atop each one. The K stands for "What I Know" about whatever lesson or unit we are starting. The W stands for what I'd like to know. And the L stands for what I learned. This is one of many tools the students can use to track their own learning and see where they need to fill in the gaps. It's also good to review for assessments.
And at some point, you have to just insist and demand that SOME level of knowledge exists.
Right. And that's why we have standards-based grading in most school districts in our state. What do you think about standards-based grading?
Your complaint is one that's never solvable, because it's open-ended in its victimization.
That's incorrect. If you look at the math questions I used as examples above, I said they were great. Let's see more questions like those, and fewer that have extraneous information that might dilute learning. I also don't think it's victimization... just a poor way to measure knowledge. And the debate over summative and formative assessments very much has everything to do with what we are talking about. They aren't mystic pastures. Standardized tests are summative assessments. Summative assessments themselves may not accurately measure knowledge.
They've been indoctrinated by the system to find excuses for failure rather than find ways to succeed.
I disagree. Most students are desperate to succeed and want to share these achievements with everyone. They may have subjects that they don't like but my experience has been that there is a lot of inspiration and motivation there if instructors take the time to get to know their students and provide a variety of learning opportunities and instructional strategies. Some instructors are lazy and don't do this and that's part of the problem.
When I talk about claiming problems, it's of the Michael Oher variety. They know the information but are frustrated by the limitations of how they can present it. What is your solution for this? Ever heard of Erin Gruwell? She went for some unique solutions and her results were stellar.
"I'm not really sure what you are saying here. I think I may need further explanation on this but on the surface I disagree."
I have stated this numerous times over several years here in Kevin's parlor, and you're only now stating that you don't understand it?
A simple but important principle is taught, or learned the hard way, by engineers. I'll explain it and I'll use examples.
Suppose you design a bridge. You have to contend with "unknowns", meaning things you do not know. There are two types of unknowns: 1) "known unknowns"; and, 2) "unknown unknowns".
No, this is not a joke. Ask any engineer.
An example of a "known unknown" is that there will be some maximum load placed on the bridge, at some point in the future, by vehicles traveling over it. You know that this will happen, but you don't know what its value will be. Thus, this value is something that you don't know, but you know that you don't know it. It is a "known unknown".
An example of an "unknown unknown" is that the steel within the pre-stressed steel/concrete beams is of substandard quality, even though it passed inspection. Because of this, the beams cannot withstand the stresses they were specified for. Thus, this is something that you don't know, but you don't know that you don't know it. It is an "unknown unknown".
An example of the importance of such things should be apparent to you. You recall the I-35 bridge collapse, right?
The statement is that students don't know what they don't know. At this point, that statement ought to be self-explanatory. It is quite simple: There are things that students don't know, but they don't know that they don't know them.
Want a simple example?
A student who is unaware of World War I will not be aware of how World War I led to World War II. For him, that chain of cause-and-effect is an unknown unknown.
And that is a tip of a huge iceberg.
"Let's see more questions like those and less that have extraneous information that might dilute learning."
I think you have missed the boat completely. Learning is enhanced by extraneous information, it is not diluted by it.
School is preparation for life beyond school. Life beyond school does not present you with choices to make, problems to solve, or questions to answer in a neat form where no extraneous information is given. In your terminology, life is not without bias.
In the real world, you are presented daily with choices to make, problems to solve, and questions to answer. Quite often, you are the one who has to decide what the choices are, what the problems are, and what the questions are. In real life, the answers are not in the back of the book, the questions and problems are not in the body of the book, and usually there is no book at all.
In my experience as an engineering student, it was a hallmark of all good engineering problems and exams that more information was given than was needed to solve the problem presented or answer the question asked. Part of the training was learning to sift through the chaff to find the wheat, as it were.
Now, think about why that might be the case. If you are a practicing engineer, and your boss hands you a problem all neatly tied up with a pretty ribbon such that nothing you don't need is tied up within, well, goddamn, dude, why the hell would he need you to solve the problem?
In the real world of engineering, the daily grind is to be told, "Here. Figure out what the problem is, find a solution, and implement it." That BEGINS with wading through the extraneous information to find out what is relevant and what is not.
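To make that concrete, here is an invented problem of the sort I mean (my numbers, not from any actual exam): "A freight train with 3 engineers aboard, hauling 42 cars and 1,800 tons of coal, travels 150 miles in 2.5 hours. What was its average speed?" The engineers, the cars, and the coal are chaff; the wheat is 150 / 2.5 = 60 miles per hour. The skill being tested isn't the division; it's knowing which numbers to divide.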
Been there, done that, and made a successful career of it.
Learning to sift through the garbage to find the core of the matter is part of learning to cope with real life. I am not surprised that you have difficulty with this as a part of teaching, because your writings here over three years are filled with it.
I would have enjoyed watching you try to cope with engineering school as a student. That would have been quite a spectacle.
In my experience as an engineering student, it was a hallmark of all good engineering problems and exams that more information was given than was needed to solve the problem presented or answer the question asked. Part of the training was learning to sift through the chaff to find the wheat, as it were.
This. Even my son's fifth-grade math curriculum (last year) included problems with extra information on a number of occasions. The point of the exercise was not only to compute accurately, but to recognize what information is needed to solve the problem. The curriculum, by the way, was a K12-provided online public school. I had my issues with it, but there's no doubt it was at least an order of magnitude better than the Everyday Math abomination the local brick-and-mortar district insists on using. This year's curriculum (Math+) looks even better so far -- at least it focuses heavily on computation (lots of problems to work).
So what about my arguments?
Actually, if you go back and read my initial remarks, I did point out questions that were not biased at all. No doubt, there are some bias-free and easily relatable questions on the ACT. Groups like FairTest have worked very hard to get us to the point that we are at right now, so there should be at least some credit for progress.
And we're back at first principles and evaluating arguments again.
Mark, you didn't deal with my argument. When you say "actually", what follows ought to point to an actual something. But it didn't. You didn't deal with them. I am telling you this so you can learn from it. You didn't deal with the argument; you've sidestepped it.
The entire problem gets back to your ability to frame and follow an argument.
You gave examples that did not prove what you said they did, you've argued over those minor details, and you've escalated appeals to authority to the very students whose knowledge we are, by definition, trying to test.
That's a major problem for logical thinking. Let me restate that: You're putting the very people whose knowledge, abilities, and skills are under consideration forward as authoritative experts.
By the very definition of the process, they're not there yet. That's not to say they have nothing of value - if you think I'm saying that, you're wrong. But you're putting the cart before the horse, so to speak, when you insist that we take at face value complaints by those who can't do the work.
That doesn't mean they don't have valid complaints. But neither does it mean that their complaints are, indeed, valid. That's where you're failing: you're giving them complete credibility without contextual comparison.
You gave examples "of bias" that are bias-free. That's your failure; that's what I can't get across to you.
Let's compare my examples to yours. I said that you couldn't evaluate well, and gave the (reference to the) example where you claimed "more people listened to Rush Limbaugh than watched Network News". Under a minute of Googling, and I found that the most that had ever listened to Rush was 15 million, but the average nightly viewership was 22 million. After I pointed those facts out to you, you didn't revise your comment, nor did you retract your statement. So my example backed up my point. I pointed to (the reference to) our first dispute, where you took issue with Kevin's point that government programs never get smaller, by citing the FCC as an example. Under a minute of Googling (yes, there's a theme here), and I found that compared to 1980 (since you claimed Reagan gutted them), the FCC's budget had grown (IIRC) 40X. That's my example to back up my contention that you shoot from the lip. You make statements, don't check them first, and don't correct them when someone else checks.
Those examples, to anybody following this, support my point. Your examples do not support your point. In neither case did what a "mine" was or what a "peony seed" is matter to the questions asked and the tasks to be completed.
Now, your examples did serve well for one facet: they back up what we're saying about the victimhood nature of so much of the teaching, and that basic understanding and knowledge isn't being taught. So in that case, your examples made our original case better than the case you thought you'd made with them!
And you don't understand that.
See, I understand what you're saying. I disagree with it, but I can understand what you're trying to say. You're just wrong. And when you can understand why those are bad examples, period, no quibbling, you'll be able to discuss this further.
Because in general, Mark, you have to understand that the non-critical pedagogy kind of thought means you have to make sure you're using the same words, meaning the same thing, to come to an agreement.
That doesn't mean that you will agree.
Kevin and I, for instance, are in near-total disagreement about joining and maintaining membership in the NRA.
But he and I are in agreement on (essentially) all the facts. Probably all, but let's leave some wiggle room just in case. He and I place different values on the same, agreed facts. So we disagree. And that's normal; that's how the world works. It's why most top-down efforts fail: they place one set value on all facets, which the people in the system don't agree with. He and I disagree, but because we're in agreement on the facts, we understand that, and we can "agree to disagree". We're not "agreeing to disagree" on the facts of the case; we're agreeing that we each have come to a different conclusion based on the same facts.
That's the issue with your evaluations: you don't understand how to back up and make sure that the person is in agreement with you on the facts, on the meanings, on the overall concepts - and, when they're not, how to work out what those differences are before attempting to go farther into the disagreement and making judgement calls and morality decisions.
You want to discuss advanced heuristics, but you haven't yet understood that if you know the material, those questions aren't biased.
That only comes up if you don't know the material and need a "reason", one that's not your fault, for not being able to do the work. It only applies in those cases.
And as a "counter-point", let me give you something anecdotal.
My mother is a teacher. (In fact, most of my family is in the teaching profession. Yes, I Know It Well.)
9th & 10th grade science. She doesn't understand many concepts. I've explained to her for years, literally, years, that if you were in microgravity, and you threw a hammer, you'd go backwards. (Technically, rotating around your CoM, but..) If you had a string tied to you and the hammer, when the string came taught, you and the hammer would stop.
Years. "But you're bigger than the hammer"
That's one of many things I've tried to explain to her. But the students love her. There is more to being a teacher than merely being right, or knowing something. There is a lot to getting involved, a lot to motivating, a lot to opening the doors and letting the students discover things. (Her degree is in biology, she understands that, but 9th and 10th grades are in the physics and physical world science-wise here.)
But at the end of the day, she's not a great teacher, because quite often she can't answer "Why". She can teach from the book, and tell them the right answer from the key, but she doesn't truly understand. They don't know that yet. They're, well, kids. She's funny, she's nice, she's easy to tease, she'll look out for them, etc.
It's attributed to Einstein most popularly, but this is very important: "If you can't explain it to a six year old, you don't understand it yourself." What that means, as DJ explained above, is having the ability to sort through the extra information, and distill it down. Cutting out complexities and simplifying. It's not to mean that you can reduce the problem to something that simple, but it means that you can explain the context. "You've got to have enough pumping ability to pull the water out of the hole in the ground - and as it rains, water seeps through the ground and gets in the hole. The bigger the hole, the more water you'll get."
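To put invented numbers on that (purely for illustration): if water seeps in at 8 gallons a minute and your pump pulls out 10, the hole slowly empties; if the pump only pulls 6, it never will. The color of the pump and the name of the mine change nothing about that arithmetic - only the net rate of 10 - 8 = 2 gallons a minute out matters.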
when the string came taught
Ack.
Taut.
Get me on a meme.....
Since my dad was a math teacher for a time, maybe I can make clear something I see by repeating it as I learned it. As I said above, I can at least provisionally agree with the importance of context in many subjects, where psychology, opinion and other subjective evaluations are part of the data. To suggest that the same applies to math suggests a conceptual misunderstanding of what mathematics actually is.
At age 7 I was told math is basically a game you play with numbers, and that if you go far enough into the game you can reach a place where you can make your own rules. True enough, and probably all a 7 year old can handle, conceptually.
At 10 that was clarified. Math is a specialized language, in the same way writing music is a specialized language. The reason musical notation is much the same as it was 300 years ago is because the language describes the sequence of sounds desired with such precision that it is still thoroughly understood today, by people playing instruments that could not possibly be conceived by the composers of 300 years ago.
Math is a specialized language in precisely the same way. It is as complex as it is because math's purpose as language is to describe what is experienced with precision as nearly absolute as possible. The particular thing it's describing makes no difference at all to how the language is used, it's the ultimate Esperanto.
Thus, the Theory of Relativity is, when all is said and done, no more than Einstein's attempt to correct Newton's math. The fact that they had completely different cultural backgrounds, different native languages, and completely different educations, was entirely irrelevant.
You see? It doesn't work unless it is independent of context.
Bias is a given - but what level of bias is acceptable for demonstrating the ability to learn in a relatively ordered society; what skills matter?
Reading the predominant language and comprehending the concepts transmitted thereby; performing basic math, including addition, subtraction, multiplication, division, fractions, and percentages; recognizing a basic alphabet, colors, numbers, and a few laws of physics; and the ability to communicate in written form using basic rules of punctuation and grammar. Failure to learn these basics generally invalidates one's usefulness in the work force, academia, and often politics, and has the potential to limit one's lifetime self-actualization.
Inability to recognize, learn, and apply these basic rules can hamstring the student entering the world of adulthood, limiting lifetime wages and self-actualization. A few can overcome this by inheritance or by marrying well, and some can overcome it through politics, but the vast majority will find themselves without means in a society they are unprepared to fulfill a useful role in, and will then turn to fringe or even criminal activity (that is, except for a few tortured artists who will be discovered by "patrons" of means). That, or they are predestined to be exploited until death for nothing more than their labor.
We can do better. We used to do much better with much less. We should be ashamed at what we have wrought.
And Ronin hits one out of the park. Well said!
Self-actualization is something I strive to pass on to those whom I mentor. It's high up there on Maslow's hierarchy, and far too few of us have it. Actually, it ties into what DJ is discussing above, which I agree with wholeheartedly. Students indeed don't know what they don't know, and it is the instructor's job to pass those measuring skills on to their students. Self-actualization is a part of this. DJ's further explanation of what Unix was talking about makes perfect sense. This would be why I use tools like the KWL. It helps to define the playing field of learning.
There is a lot to getting involved, a lot to motivating, a lot to opening the doors and letting the students discover things.
Agreed. This would be a main reason why we have the problems we have...too few teachers do this.
but she doesn't truly understand
That's too bad, because it means no enduring understandings were passed on. But this is what I have been talking about in this thread... students achieve enduring understandings and do well on tests if they can relate the problems to their own lives. Your argument that they need to know it regardless of the context is valid. But will they KNOW it? Will it stay in their long-term memory? Or will they fail to understand, like your mother, because they don't know what a peony seed is or how to play football? This is why I have problems with summative assessments. I'm not sure they pass on enduring understandings. If a student doesn't know what a peony seed is, or a mine, or the rules of football, they might still get the answer right, but they will remember the one about junk food because it relates to their lives. That one has a greater chance of becoming an enduring understanding.
because they don't know what a peony seed is or how to play football.
At least you're not dropping your irony in your relaunch.
"This is why I have problems with summative assessments. I'm not sure they pass on enduring understandings."
Why is a test supposed to "pass on" skills to students? I thought the whole point of a test was to… oh, what's that word… oh yeah… TEST how well the students had already acquired those skills?
It seems to me that if students actually understand a skill and where such a skill actually applies, then they should be able to apply it even in unfamiliar circumstances. Therefore, it would seem that it's actually better if a test presents a problem in a context which the student isn't familiar with because that would demonstrate the student's ability to adapt and apply those skills without being able to rely on rote memorization.
You can—and imho, should—use familiar or imaginable situations to introduce new skills. Then students should also be taught how to generalize those skills for unfamiliar situations. But that's part of the education process (and intermediate tests to evaluate progress and determine where review is needed), that is not the purpose of a test.
Assessment: 1. the act of assessing; appraisal; evaluation.
"Why is a test supposed to "pass on" skills to students?"
In my experience, a test was always a learning experience, and I mean over and above just improving my skill at taking tests. Regardless of whether or not it was supposed to pass on other skills, it did so.
Ok, perhaps that was too flip.
because they don't know what a peony seed is or how to play football.
The problem with your "point", such as it is, Mark, is that you have yet to get to understanding. No, you can't have an enduring understanding until you have an understanding, we agree.
But you don't understand yet. You don't understand the basics, and you're repeating something that's been rebutted. That's extremely rude, and it indicates that you're not actually evaluating the contending argument (see above as to my assessment); you're merely responding automatically, without thought.
The type of seed is meaningless to that question. As long as you keep sidestepping - neither backing up your claim that it matters, nor retracting it as an "example" - you're insulting the other people, and you are reinforcing what I claimed about your ability to evaluate.
Yes, I get to claim that as a "victory", if you keep doing what I've described you doing.
You're so certain that you're right that you've not established a base level of understanding of the material yet. You don't have mastery of it; you don't understand it. You're eliding past that, and going on to trying to discuss much more complex issues - but it's obvious you don't have a base understanding, much less an "enduring one".
This is why I have problems with summative assessments.
No, it's not. I'm sorry to speak for you, but in this case, I know the "questions to ask."
You don't understand summative assessments, and their value, and their limits.
There's a reason, Mark, we keep trying to get you to work back to base assumptions and facts. That's because you often skip over making sure that those are correct and consistent.
When you don't understand the foundation, there's no possible way you can understand concepts that build upon it. You might - accidentally - be right on an opinion, but the way you arrived there is erroneous and means that your arrival was accidental, and thus does not add to your authority.
More often than not, it means you won't be at the correct place, even accidentally.
Don't try for more complex arguments when you're in disagreement with someone, go back to the SIMPLER arguments, and try and get agreement on the context, the words, the meanings, the memes. That's the only way you'll ever truly understand their side, and be able to judge it versus yours.
Maybe chest thumping isn't really accurate any more. At this point it seems that you are more focused on me than the actual issue itself. Let's see if we can get back to it again.
I understand your point regarding the irrelevance of context. Students should be able to ignore their lack of knowledge of peony seeds, mines, or football and work the problem. Is it accurate to say that your view is that the students are hiding behind their lack of overall knowledge of how to work the problem by complaining about not knowing what these three things are? A dodge, perhaps?
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom. I offer as evidence the FairTest link above as well as the Aguinis study above. The examples I used above from the ACT site are brief descriptions of a much broader issue that certainly has improved. The days of "Regatta" may be over, but if you examine the Aguinis study more closely, the method of measuring for bias in exams has now been called into question.
A side note, I have really enjoyed this entire dialog as it has sharpened my thinking on pedagogy before school begins again.
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom.
And for every majority you get, you define a minority by ignoring it. The point to having a variety of questions in a variety of contexts is so that a) chances are good there's something that strikes a familiar note, that gives you an insight into how it applies to your life, and b) the vast majority of that variety of questions does not strike a chord with your personal experience, thus requiring you to learn how to make the same tools work the same way, even in an alien context.
What you appear to be suggesting is that if someone learns how to use a hammer to do house framing, it doesn't matter if he's incapable of using a hammer to put shingles on a roof... or drive a tent stake... or hang a picture.
Obviously I disagree. I feel like if you're going to teach someone how to use a hammer, you should expect them to become competent enough that they can use the same hammer the same way in any situation justifying the use of a hammer in the first place.
At this point it seems that you are more focused on me than the actual issue itself. Let's see if we can get back to it again.
No. The problem is that you do not understand what you are trying to be an authority on.
There is no way to separate that from the rest of the discussion. It poisons everything that you attempt, and it fatally flaws all your arguments. It is the root cause of the disagreements, and you are unable to review that; you're unable even to run an example as a hypothetical, with givens that you might not agree with but accept for the sake of argument, or even to give a relevant example.
Yet you expect us to hold you as an authority. To allow you to judge and grade yourself, based on how you feel you did, not on how well you actually did.
There's just no way to separate the two, at the moment.
I understand your point regarding the irrelevance of context. Students should be able to ignore their lack of knowledge of peony seeds, mines, or football and work the problem.
No, you don't understand. The knowledge, or lack thereof, is totally irrelevant to demonstrating mastery of the concepts requested.
I've said that many times. You can't understand that, stated that plainly, yet you think you're above average at evaluating!
Those two things do not go together. What I, and others, have been telling you, is you're looking for excuses to not attempt the demonstration of learned skills. In the examples you have given, bias doesn't exist. That's not to say it never does, nor that it's not something you really ought to learn to deal with, but in your chosen examples it does not exist.
I contend that if the context of the question is altered so the majority of students understand the vocabulary in it, a more accurate representation of knowledge will reveal itself and enduring understandings will blossom.
Then you're wrong. First, you'll never be able to build a question where you can't have someone complain they didn't know something. "I didn't know what water was!" (1a - by the time they get to high school, they should damn well know what a "mine" is, even if that mattered.) Second, you've got enough trouble with changing or coming up with new definitions as it is - well, we can easily see the problems there. But third, and again, and again, and again, those nouns are irrelevant. They can be removed, changed, modified, or replaced with nonsense words (and often are!) and the question of skills is unchanged.
If you know how to answer the question, the noun won't bother you. "Enduring understanding" is irrelevant to this discussion; everything you needed to know was provided in the question.
Other than the ability and practice in doing the actual skill work. Which is what those tests are seeking to determine.
So, Unix, are you saying that the folks at FairTest (link above) are making points that aren't valid? What about the Aguinis study? Also not valid?
So, Unix, are you saying
What I'm saying should be very clear. Stop trying to evade and/or muddy the waters.
To directly answer your question: I haven't dealt with either the FairTest "folks" or the Aguinis study yet in this thread.
Considering you're still claiming that questions with all the relevant information given are biased, we aren't able to go past that to more advanced areas.
I think what I have said is very plain. Why don't you deal with that, and let's hammer it down to basics we can agree on, and then work on the more advanced stuff?
Why is a test supposed to "pass on" skills to students? I thought the whole point of a test was to… oh, what's that word… oh yeah… TEST how well the students had already acquired those skills?
Right. Ed's got it in one. I forgot to back up ENOUGH to basics.
This gets back to how do you measure, how do you assess, and how can you compare. How do you compare what students are learning, which teachers are teaching better than others, and how do you establish reliable measurements?
Remember how I was asking you that before, Mark?
Well, Ed's reminded us both that that is a base question.
Of course, there are a variety of ways to do this through both summative assessments and formative assessments. And you're right, it is difficult to compare, not only because some teachers are teaching better than others but because standards vary from state to state. The question is... should we have national standards? That's certainly what Obama wants... an established, reliable measurement.
I think there is a misunderstanding regarding passing on enduring understandings. Yes, you are measuring how well students have acquired various skills with the assessment. But the assessment should hopefully be reflective of the practical application of these various skills in real life. Many of the complaints on here discuss the fact that students don't have knowledge that relates to even the basics of everyday life. Shouldn't assessments be reflective of this and connect the learned skills with their practical application in reality? And shouldn't that reality be things that they know and are going to encounter as they move through life?
And shouldn't that reality be things that they know and are going to encounter as they move through life?
How are you going to know in advance what they are going to encounter in life? I'll grant you, the kid from the back woods who works in the tire shop and is content with that is not as likely to need advanced maths as the child of a programmer and a music theory teacher who wants to go to college.
But if you don't show the kid working in the tire shop that the math he uses to do wheel alignments when the computerized machinery is on the fritz is in fact the same math the music theorist uses to tell whether or not a chord progression "works" before they ever hear a note... well you have no way to tell whether you just deprived the world of its next Beethoven, do you?
Teaching a subject, I can see the point of making sure the context is within their understanding of the subject, sure. But a lot of what is taught is not a subject so much as a tool, that works in the same way across several subjects. Math is one of these, as is logic/"critical thinking", as is language. With those, if learning is tied to a particular subject to which the tool applies, aren't you limiting their understanding of the tool by failing to show how it works the same way on what they don't understand?
How many people today do you think have a deep understanding of classical Greek culture? Few, I'd venture. How many do you think are conversationally fluent in the various dialects of Greek spoken 2300-2500 years ago? Fewer still, I suspect. And yet an American 12 year old boy, who wants to be an astronomer when he grows up and can't even point out Greece on a map can understand precisely what Aristarchus of Samos was saying, if he knows the math.
I think there is a misunderstanding regarding passing on enduring understandings.
Yup.
Yes, you are measuring how well students have acquired various skills with the assessment.
So how do you measure "enduring understanding," now that you've agreed with us on that?
Well, the official answer in the year 2010 is high stakes testing. Certainly, those tests do provide aggregate data on how effective pedagogy is today. In all honesty, I'd like to see high stakes testing for social studies. One of the reasons I got into education was because several young people I had met had no idea how our government worked. One measure of having an enduring understanding, in my opinion, would be a larger percentage of people having a higher level of knowledge (closer to the evaluation and synthesis levels of Bloom's) of the functions of government.
I wholeheartedly agree that another measure should be pedagogy focused on real life situations and dilemmas. If they learn something in school and that knowledge helps them every day for the rest of their lives performing a task or function, that would be an enduring understanding. Measuring this would require a study that correlates the skills learned in school with success in careers. This might be tough due to other confounding factors.
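Just to make that "correlate school skills with career success" idea concrete, here's a minimal sketch. The scores, the outcome measure, and everything else in it are invented purely for illustration (not from any real study), and the statistics.correlation call needs Python 3.10 or later:

```python
# Sketch: correlate a school-measured skill score with a later career
# outcome measure. All numbers below are invented, for illustration only.
from statistics import correlation  # Pearson's r; Python 3.10+

school_scores   = [62, 75, 88, 91, 70, 83]   # hypothetical assessment scores
career_outcomes = [48, 70, 85, 90, 66, 79]   # hypothetical success measure

r = correlation(school_scores, career_outcomes)
print(f"Pearson r = {r:.2f}")  # near 1.0 would suggest a strong link

# Caveat, per the paragraph above: even a high r can't rule out
# confounding factors (family background, local economy, etc.).
```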
Well, the official answer in the year 2010 is high stakes testing.
Is that your answer?
You'd test enduring knowledge that way, if it was up to you?
For social studies, you better believe it. And the consequence of poor results should be fired teachers as opposed to punishing the schools. Right now, NCLB needs to be adjusted a bit so we don't have this scenario any longer.
http://www.janebluestein.com/articles/football.html
But proficiency in how our government works is a must. I would agree that high stakes testing is beneficial for other disciplines as well, particularly in generating illustrative and important data. This data lets us know where adjustments need to be made. How they are made, of course, is a matter of debate on effective instructional strategies. Of course, this does not necessarily mean that standardized tests should be the ONLY means of high stakes testing.
One other thought on measuring enduring understanding... look at our society as a whole. Sadly, I find a decided and seemingly increasing lack of people who have even a basic understanding of many of the concepts that they should've been taught in school. Of course, we disagree on why this is the case, and my thought on this lack of understanding is pure opinion, but honestly, if you look at the lack of functionality in our society in a number of key areas, one has to wonder how many of us actually have an enduring understanding of the basics of academics.
You'd test enduring knowledge that way, if it was up to you?
For social studies, you better believe it.
Though you didn't ask it, my answer would be "Repeat the exact same test in some period of time, and see what the delta is."
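To make that retest-delta idea concrete, a minimal sketch with invented scores (no real data; the names and numbers are mine, purely illustrative):

```python
# Sketch: measure "enduring understanding" as the delta between a test
# score and the score on the exact same test given again months later.
# All scores below are invented, purely for illustration.

scores_initial = {"student_a": 88, "student_b": 95, "student_c": 72}
scores_retest  = {"student_a": 84, "student_b": 93, "student_c": 55}

deltas = {name: scores_retest[name] - scores_initial[name]
          for name in scores_initial}

for name, delta in sorted(deltas.items()):
    print(f"{name}: delta = {delta:+d}")

# A small average drop suggests the material stuck; a large drop suggests
# the first score reflected cramming rather than enduring understanding.
average_delta = sum(deltas.values()) / len(deltas)
print(f"average delta = {average_delta:+.1f}")
```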
I'm not sure if this thread is dead but check this out....
http://www.nytimes.com/2010/09/03/education/03testing.html?_r=1&scp=2&sq=us%20education%20department&st=cse
This could be fantastic. It addresses many of the issues I have with standardized tests, as well as providing the real-world situations that many here have requested.