Opinion: Public Education Is Failing Our Students

Oct 20, 2022

Submissions are welcome – send to [email protected]. The last day to submit an opinion column (or letter to the editor) about the Nov. 8 election is Oct. 25, two weeks before Election Day.

By Jonathan Martin, M.D.

On Oct. 17, 1979, the U.S. Department of Education was created; 43 years and trillions of tax dollars later, our children’s proficiency scores are worse than they were at its inception. It is far past time for the federal government to extricate itself from our children’s education. The curricula being handed down to the states by the federal government are divisive, often anti-American, and filled with overly controversial material, while completely lacking the academic rigor needed to set our kids up for success.

Teachers, like Ramona Bessinger in Providence, shouldn’t live in fear because they’ve pointed out the poison seeping into our kids’ education. Teachers shouldn’t be browbeaten by their union leadership, superintendents, and RIDE when they speak up about the softening of the curriculum, the lack of resources for both gifted and delayed students, or the clear enabling of pathological behavior that is celebrated by the USDOE. Teachers shouldn’t be afraid to speak up about these things, but they are bullied into complacency.

A parent like Scott Smith in Virginia shouldn’t be forcibly dragged out of a school committee meeting, bleeding, because he wanted answers about why his daughter was sexually assaulted in a school bathroom. Parents shouldn’t have to become warriors to protect their children’s innocence because the place they send their kids for seven hours a day is indoctrinating them with malicious ideology promulgated by the USDOE. Parents shouldn’t risk being labeled domestic terrorists by the federal government for speaking up at school committee meetings, but they’re bullied into silence.

Cities and towns should have the right to decide what type of curriculum is right for their community. Parents and educators together, without the interference of the USDOE, and with reverse-regulated support from RIDE, should control what their children are learning in school.

Teachers want to teach. Parents want their kids to learn. Children must be protected from ideological indoctrination in lieu of learning. Our system, both federally and at the state level, is failing our children and has been for the past 43 years.

It is time to take a local and independent approach to our curriculum development. It is time to take back our children’s education and support our students, teachers, and parents. It is time to end the US Department of Education.

Jonathan Hamilton Martin, M.D., is a candidate for Rhode Island House Dist. 24. 


2 Comments

  1. Eugene Quinn

    Proficiency scores are widely misunderstood. Current “proficiency” measurements are done in a way that guarantees that at most about half of students will be classified as meeting or exceeding standards. In Massachusetts they are based on a standardized test called the Massachusetts Comprehensive Assessment System (MCAS). In Rhode Island they are based on the Rhode Island Comprehensive Assessment System (RICAS), which is identical to the MCAS in every aspect except one: it is not standardized against our students, but against students in another state (Massachusetts, of course).

    What do we mean by standardized? A thorny question in assessment is how to compare the results on two tests (say, successive administrations of RICAS) if they don’t have the same questions. For the MA and RI tests, this is done with Item Response Theory (IRT): https://en.wikipedia.org/wiki/Item_response_theory. Typically there will be two sets of questions on the test given in the first year: questions that count toward the score for that year, and questions being field-tested for the next year’s test (‘matrix items’). The contractor (Cognia) uses the field-test data to estimate two parameters, difficulty and discrimination, for each test item (see Appendix F of the 2021 RICAS Technical Report: https://www.ride.ri.gov/Portals/0/Uploads/Documents/Instruction-and-Assessment-World-Class-Standards/Assessment/RICAS/2021_RICAS_Technical_Report.pdf?ver=2022-08-16-152613-850 ).

    “Proficiency” is defined using a parameter (‘theta’) of which each student is assumed to have a characteristic value. These values are assumed to follow a standard bell-curve distribution in the tested population. We do not observe theta directly; we only see its effect on responses to the test questions. The difficulty and discrimination parameters, together with a student’s theta value, determine the probability that that student will correctly answer each item (Section 6.2.1 of the 2021 RICAS Technical Report).
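[The item-response relationship described above can be sketched with the common two-parameter logistic (2PL) model. This is an illustrative sketch, not the operational RICAS model, which may differ in detail (for example, by including a guessing parameter); the function name is mine.]

```python
import math

def p_correct(theta, difficulty, discrimination):
    """2PL IRT model: probability that a student with ability `theta`
    answers an item correctly, given the item's difficulty (b) and
    discrimination (a) parameters estimated from field-test data."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# An average student (theta = 0) facing an average-difficulty item (b = 0)
# has a 50% chance of a correct answer, whatever the discrimination.
print(p_correct(0.0, 0.0, 1.0))  # → 0.5
```

[Higher theta raises the probability of a correct answer on every item; fitting theta to a student's observed responses is what places them on the hypothetical bell curve discussed below.]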
The sole purpose of IRT is to turn the test responses for a student into a theta value, that is, to place them on the hypothetical population bell curve (for our purposes, we’ll ignore the absurdity of characterizing a young person as a point on a bell curve). The estimated theta values are converted to a scaled score, another bell curve but on a different scale. Finally, three ‘cutpoints’ are determined that divide the scores into four ranges for reporting purposes: not meeting standards, partially meeting standards, meeting standards, and exceeding standards.

As it turns out, the cutpoints are not exactly, but very close to, 1.5 standard deviations below the mean, the mean of the bell curve, and 1.5 standard deviations above the mean. The implication is that, approximately, 6.6% of the tested population will be declared to be not meeting standards, 43% partially meeting standards, 43% meeting standards, and 6.6% exceeding standards – ALWAYS. What few people realize is that this guarantees that only about half of the tested population will be declared to be meeting or exceeding standards, regardless of how good (or bad) the scores are overall.

So when a candidate says that 20% of Rhode Island students ‘met grade level standards’ in math, what the data really says is that a Rhode Island student who took a test designed specifically around the Massachusetts curriculum standards and normed using the Massachusetts student population has a 20% chance of scoring above the median (while a Massachusetts student hypothetically has a 50% chance). In my opinion, this makes Rhode Island students sound worse than they are. In 2021 the average scaled scores in math were 481 (statewide) for Rhode Island and 489.7 (grades 3-8) for Massachusetts, on a scale that runs from 440 to 560.
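[The cutpoint arithmetic above can be checked directly against the standard normal distribution; the ~6.6% figures quoted are a slightly coarser rounding of the same tail probability. A minimal Python sketch:]

```python
from statistics import NormalDist

z = NormalDist()  # theta is assumed to follow a standard bell curve

# Cutpoints at 1.5 SD below the mean, the mean, and 1.5 SD above the mean
not_meeting = z.cdf(-1.5)
partially   = z.cdf(0.0) - z.cdf(-1.5)
meeting     = z.cdf(1.5) - z.cdf(0.0)
exceeding   = 1.0 - z.cdf(1.5)

print([round(100 * p, 1) for p in (not_meeting, partially, meeting, exceeding)])
# → [6.7, 43.3, 43.3, 6.7]
```

[Because the top two categories together cover everything above the mean, exactly half the assumed population lands in “meeting or exceeding” by construction, independent of how well students actually performed.]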
Of course, if you are going to propose that every community make up its own curriculum, it will be impossible to do any meaningful assessment of proficiency, because it will be impossible to construct a test that can be given to all students.

  2. Eugene Quinn

    As it turns out, NAEP uses the same standard-setting method as RICAS (the ‘modified Angoff method’). Here is some history on the development of standards for NAEP, which is relevant: haminstitute.org/national/commentary/what-do-you-mean-proficient-saga-naep-achievement-levels
