
Vast Literatures and the Two Paper Rule for Peer-Review

About two weeks ago, Noah Smith (@noahpinion) wrote an extensive blog post about the problem of academics referencing “vast literatures” to rebut an argument. As Smith tells it, simply admonishing another academic that he/she/they ignored an extant vast literature is at best impractical and at worst pointless. In the first case, not only is it impractical to go back and read a vast number of articles, but it is also imprudent to jump into a vast literature willy-nilly. As to the latter case, well, the vast-literature rebuttal may point toward a line of research, however vast, that is as baseless as phrenology. As such, Smith properly puts the onus on the referencer of the vast literature to provide two exemplars from it. According to Smith:

what you need to assess quickly is not what the literature claims to find, but whether those claims are generally credible. And that is why you need to see the best examples the literature has to offer. Hence the Two Paper Rule.

I find the Two Paper Rule interesting not just as a nascent scholar but as an editorial board member for an academic journal. As such, I am an initial gatekeeper to what gets published in one of the oldest peer-reviewed journals in education. While I can’t guarantee the quality of the peer reviews themselves, I can redouble my efforts to strengthen my link in the peer-review chain. No one wants to contribute to a controversy like the one over at Hypatia, but following the Two Paper Rule could directly streamline and improve academic publishing and arguments between scholars.

Like any general rule, this one has limits and potential for abuse, as noted by Krugman and Cowen, and I too have my own quibbles with the Two Paper Rule. However, I believe everyone in the peer-review chain has an obligation to use his/her/their hard-won knowledge to improve both the published article and the process. For instance, if I decide to reject a submission or ask that it be revised and resubmitted in order to account for a missing area of literature, then I need to provide, to the best of my ability, examples or authors that the submitting scholar should read. After all, if I know enough about the topic to notice the discussion is missing an entire area of research, then I am familiar with at least one prominent scholar or one well-written article in the discipline.

Anyway, that’s what I would want from feedback on a paper I submit for publication.


Dispatch from a Second-class Educator

“I’m sorry, but you can’t call yourself a real teacher.”

“What do you do for your real job?”

“Isn’t it all just tricks and memorization?”

“I’m sure you know all the best bars and hotels on [the local university main drag].”

These quotes are just examples of the variety of comments I receive when revealing my profession. Each came from either a parent of a student or a colleague in a graduate school of education. Read them over again and think about that. Let it sink in.

I have nine years of experience in teaching all manner of entrance exam preparation. Tests like the SAT, ACT, GRE, GMAT, and MCAT are continually derided. Often deservedly so.

I have taught in Seattle and its wealthier suburbs. I have taught in affluent enclaves and hopelessly rural areas in North Carolina. I have taught in places as disparate as Charlottesville, VA and Orange, SC. I’ve taught at HBCUs and in programs like AVID; pre-meds at Duke and veterans from NC State. I couldn’t reach each and every student (no teacher can), but I have evidence to show I significantly helped the majority of them.

But consider my nine-year commitment to gaining profound knowledge in and around such exams: of the troubles a broad swath of students encounter in understanding the tests, of the effects of test anxiety and ways to mitigate it, of how to coach young people who are unaccustomed to less-than-stellar achievement, and of ways to reach students who feel the deck is stacked against them. Knowledge such as mine is hard-won through countless hours of preparation, study, and practice both for and of teaching.

Am I ashamed of my work? No. Do I despise the industry and its reason for being? Yes. Am I uncomfortable with how much the company charges for my services? Beyond measure. Would any parent with the means pay that charge without reservation? Absolutely. Do I have deep, irreconcilable internal conflicts with all of the above (see what I did there…)? More than you could ever know.

Effectiveness of Online Learning

Online learning works at least as well as a traditional classroom when studied in adult and professional learning contexts. Its efficacy in K-12 isn’t in question (it does work; students do learn), but its effectiveness has yet to be properly ascertained. These are the conclusions of a meta-analysis published by the Department of Education (Means, Toyama, Murphy, Bakia, & Jones, 2009). The report pursues implications for K-12 education, but I think these ‘findings’ are not much more than suggestions for further research. However, it is possible for K-12 educators to introduce more blended (or fully online) learning as deemed appropriate by teacher experience at a given grade level.

The reason I am hesitant to extrapolate the report’s conclusions to K-12 is learner capabilities. Of the one hundred or so studies that fit the criteria for meta-analysis, ninety-seven were in adult and/or professional education settings, such as military and health professional development. The motivations of these learners are surely different from those in compulsory schooling. While many school-aged kids are quite capable of learning online, others lack the desire, ability, or support to succeed in an online learning environment. Additionally, digital learning requires a slightly different skill set than does traditional classroom learning. A coin flip would be about as predictive of the benefits for ‘classroom’ management: some students will benefit from fewer social distractions, while others will be distracted by other technologies (social media and the like) or by household conditions. Thus, it comes down once again to the old teaching maxim of ‘know thy audience’, and to implementing digital tools accordingly.

Lastly, since humans are social beings and much of learning has a social aspect, I am still waiting for reliable methods of fostering beneficial student interactions. As Chickering and Ehrmann (1996) prominently noted long ago, good teaching involves building cooperation and interpersonal, reciprocal learning among students, and teaching with new technologies is no different. I suspect the ultimate solution to developing consistently helpful methods of socialized learning lies in another principle of good practice: respect for diverse ways of learning. In short, variety in student interaction should be the order of the day.

Effective Assessment

Unsurprisingly, I’ve had assessment on the brain for some time now. While I think the case is open and shut for assessing students, I really struggle with how to assess teachers.

On the one hand, as I discussed previously, I think student-produced evaluations are effective and rely on the only people who observe the teacher in action daily. On the other hand, can students ever be reliably knowledgeable about their own learning? If the ultimate measure of a teacher’s skill in the classroom is how much students learn, then students as evaluators appear to be a paradox.

So what if assessment shifts to satisfaction? At first blush it seems reasonable that a satisfied student is one who has learned. But we know that not to be the case (Cervero & Wilson, 1996; among others). Thus, I’ve met another contradiction. If a satisfied student hasn’t necessarily learned and a dissatisfied student hasn’t necessarily not learned, then how does measuring satisfaction help matters?

Lastly, the most grievous problem for assessment is full transfer of learning. Many activities can produce something that is learned (some sort of fact), but transfer of learning only occurs when the student incorporates what he or she has learned. Think of a vocabulary test. Memorizing pedantic, abstruse esotericisms rarely leads to actually using those words in either real or academic life. In this light, a student would report learning a bunch of words, which would be assessed as effective teaching; but the new words never transfer into the student’s thoughts and behaviors, which doesn’t constitute real learning. Does it constitute poor teaching, though? Even if it does, is it possible to assess transfer of learning?

On Tutoring: what it is and isn’t

I quickly realized one post discussing on-demand tutoring was insufficient. Let’s look at the make-up of a tutor.

Many students want, nay demand, quick answers. After all, learning is difficult. Cognitive science informs us that the brain doesn’t make learning easy. So it is unsurprising for students to seek out quick, easy answers. But quick answers can be readily attained with skilled use of Google. When that falls short, those answers can come from classmates. But simply receiving bits of information does not constitute learning. At least not in any way that I would recognize.

The last thing a tutor ought to be is a source of answers; someone with whom to fill out a worksheet. While such an endeavor may actually raise grades, it doesn’t have a lasting effect.

A degree better but still insufficient is the tutor who only deals in facts and explanations. Such a tutor corrects the tutee and further clarifies the student’s understanding, which is great. But that tutor does not teach metacognitive skills whereby the tutee can learn to monitor his or her own understanding. Tutoring this way does indeed improve both grades and real learning. I argue this to be the mode of the Standard American Tutor.

However, excellent tutoring is first and foremost a relationship. Expertise is certainly necessary, but so is an ability to guide the tutee through the cognitive processes of the expert. It is no wonder, then, that I adhere to Brown, Collins, and Duguid’s (1989) notion of cognitive apprenticeship. As such, a tutor must not only be able to convey information and explain concepts, as is the common perception, but also lead the tutee through murky answers and demonstrate ways to handle frustration, among other difficulties. In addition to modeling metacognitive strategies, the excellent tutor nurtures the psychological well-being of the tutee. By validating the student’s concerns and suggesting practical ways to cope with those challenges, the tutor becomes a student’s closest educational advocate.

[Obviously, treading into emotional advice must come with a caveat. The student must clearly understand whether or not the tutor has any licensed experience in counseling.]


On Tutoring: online and on-demand


A recent article in the Chronicle of Higher Education noted a shift in students’ preferences toward on-demand tutoring over the traditional peer-to-peer model. The Chronicle article rightly raises concerns about the financial burden students face when they turn to these services. While I too have reservations about the developing pay-for-service culture in education, my chief concern is the quality of the education. Actually, that focus describes my point of departure for all educational questions, but that’s another post entirely.

Buyouts and growth.

Chegg and an IAC-owned tutoring service have both recently increased their respective influence in the online tutoring market. Chegg effectively doubled down by buying up a growing competitor in the field. The IAC company (IAC being the huge group that owns everything from Tinder to The Daily Beast and Vimeo) bought The Princeton Review (TPR) as a means to access the company’s expertise, reach, and name recognition in test prep and admissions counseling. TPR had been growing its own business online through classes and tutoring prior to the purchase. However, neither TPR nor Chegg offered on-demand tutoring until the acquisitions were made.

Clearly, the demand for on-demand tutoring is coming from the same place all demand does: a place of need. But the question remains whether on-demand tutoring is even tutoring per se. The practice has received its share of praise, but much of the acclaim makes on-demand tutoring sound much more like a quick Q & A than anything substantive; if tutoring is sitting down to a nutritious meal, then on-demand “tutoring” is drive-thru. How much can you actually learn from a “tutor” who doesn’t know you and never will? What could be gained, in terms of sustained learning, from a session that may be no longer than 10-15 minutes? Additionally, many question the qualifications and selection process for these tutors.

Benefits and scams.

Many proponents of online education often cite the increased access to education the internet provides. That I do not dispute. Once again, I’m concerned with quality. And when education becomes a for-profit endeavor, I believe the level of scrutiny on quality should vary directly with every dollar charged.

In recent years, the greatest consumers of for-profit education (more on the subject here) and for-profit on-demand tutoring have been members of the US military. The IAC tutoring service built its business on the GI Bill and has profited famously. It’s no wonder, then, that shortly after acquiring TPR, IAC named the former president of American Public Education, Inc. (APEI; what a name!) to run the two companies. APEI, it should be noted, has a special relationship with top brass in the military. As such, the vast majority of its students are or were service members. That wouldn’t be a bad thing if the education were worth more than the megabits it is transmitted on. APEI has been embroiled in lawsuits from both its shareholders and its students over various fraud charges.

It appears the on-demand tutoring market is growing as an offshoot of the for-profit education trend and the increasing number of international applicants and students. I find it difficult to imagine how the industry can avoid creating an atmosphere of answer-giving and other shortcuts that border on plagiarism. When a company answers to shareholders who are beholden to the bottom line, and a “tutoring” service is billed by the minute in the moment of need (ostensibly to increase access and affordability), how exactly does that lead to the methodical and attentive relationship most would recognize as real tutoring? How would such a business garner repeat customers if each tutor actually took her time to fully explain concepts when the student expects a quick answer? I don’t think it can. Meanwhile, I think more service members and parents of teens will be hoodwinked into paying for poor instruction and/or quick, incomplete answers.

Online Learning: reflection on an exercise

How better to analyze online learning than to try your hand at designing an online course and taking one yourself? What follows is a reflection on the affordances and constraints of the former.

Initially, I began designing a course intended for college students preparing for a standardized entrance exam. While I felt strongly about the content and the general pedagogical approach, I couldn’t ascertain exactly how teaching the course fully or even partially online added any value to the instruction. So I scrapped the course and began fresh.

What I developed was a wholly new undertaking. I designed a hybrid course of three connected lessons for a professional development program. At every turn I wanted to promote social learning. I chose Google Classroom as the platform for the course because of its intuitive, blog-like architecture. The setup easily facilitates exchanging comments on various portions of the coursework. The difficulty with Google Classroom, however, is that it is fairly limited in how the designer can connect third-party applications. There are no fancy widgets or plug-ins available. Working within the confines of the LMS, I could readily link websites, upload photos (though not inline with posts), and connect any Google product. After some dedicated internet searching, I found a spreadsheet with formulas that connect it to a Twitter account. This was important to the course design because I very much wanted a back channel that participants could use to revisit and extend conversations (Chapman, 2015; EDUCAUSE, 2010).

In presenting course material online, I used a preview-reflect model for each lesson (Goldenberg, 2010). The preview helps to tap existing background knowledge and experiences while the reflection aims to connect the new material with the previous knowledge, in a dialectic fashion. Pedagogically, the affordances of web tools made possible the inclusion of videos (which can introduce a topic in a way that taps prior knowledge as well as establishes baseline knowledge if a participant is unacquainted), interactive games/presentations, and collaborative documents.

Lastly, I found it immensely helpful to design lessons with an explicit alignment table that links learning objectives to assessments. Much can be said for authentic assessment (Lock & Redmond, 2015), but I believe it is paramount for assessments in online/hybrid courses to be very carefully designed. It can be tempting to include a rote assessment at the conclusion of a lesson simply because it provides rudimentary evidence of learning, but a more thoughtful and active learning activity (Chapman, 2015; Li & Protacio, 2010) should constitute the assessment.

The result, I believe, is a pretty decent web portion of a hybrid course. Check it out on Google Classroom with class code 7knwgp.

I plan to migrate the course to a more accessible LMS, but that will take a little while.

The Empiricists Strike Back?

Two things have colluded to initiate this piece: 1) the teacher strike in Seattle (a symptom of Common Core implementation problems), and 2) my first acquaintance with the work of John Hattie. Visible Learning (2008) contains some compelling arguments for improving schools and evaluating teachers. For now at least, I want to keep my comments to the latter, lest I muddy the waters, brackish as they are.

Let me dispense with the notion of testing. When economically and business-minded people approach the school system, they do so with what appears to be an undying need to quantify and hold accountable. This process usually starts with graduation rates and, of late, trends toward measuring student outcomes. These well-meaning administrative progressivists have berated the American public with ideas of minimum standards and return on investment for over a century. After all, who in his or her right mind would disagree with efficiency and efficacy?

The problem necessarily lies in the measuring instruments. You can only measure what the instrument is capable of measuring. Where is the yardstick or barometer for effective teaching? If there were such a thing, why would the yardstick be lined up to measure the students? In other words, shouldn’t someone put eyes and ears on the actual instruction in order to determine its efficacy? Put this way, it is plain that testing students in order to gauge teachers is prima facie illogical.

If only we had the resources to evaluate teachers through ongoing observation, you say. Well, John Hattie believes we do. According to Hattie (along with Irving, 2004; supported by Bendig, 1952; Tagamori & Bishop, 1995), teacher evaluations should be conducted by the students themselves. It is a myth, he argues, that students will rank their instructors on some capricious whim. If Hattie (and Irving) is right, then why is it so hard to convince the empiricists? The cynic in me thinks the reason is political: the public wouldn’t understand; it isn’t common sense; the juggernauts of testing have too much money involved. But I’m not convinced that educators have made the argument for themselves. While I haven’t researched specific cases, I have not seen many solutions to the testing debate, just troves of disgruntled teachers and students.

Hot and Fresh Out the Kitchen!

This may be the one and only contribution to education that R. Kelly (by extension) has ever made.

Here I’ll post about my thoughts and concerns regarding cognitive processes, especially how they relate to technology and literacy within the field of teaching.