
EssayTagger is a web-based tool to help teachers grade essays faster.
But it is not an auto-grader.

This blog will cover EssayTagger's latest feature updates as well as musings on
education, policy, innovation, and preserving teachers' sanity.
Showing posts with label rubrics. Show all posts

Tuesday, October 1, 2013

K-5 Common Core standards in-progress!

EssayTagger's free Common Core Rubric Creation Tool has been very well received by teachers. But I initially only adapted the 6-12 standards. I'm finally gearing up for K-5!


Our free Common Core Rubric Creation Tool is quite popular. It's been used to create over 7,500 Common Core-aligned rubrics in just its first year! And easily half of our customer support emails are from people who want us to incorporate the K-5 standards.

Well, I hear you and I am working on it!! Check out the work-in-progress.


Want to help? 
I'd love some collaborators! This is difficult! Use the support widget on the website, respond in the comments below, or find me on Twitter (@KeithMukai) if you want to contribute!


Some background
For those of you who don't know, the innovative aspect of the tool is that it breaks down each standard into its assessable sub-components:




This solves the problem that teachers face when they look at the standards; the dang things are just too vague, cover too much ground, or just aren't assessable.

It was also a crap-ton of work for me! Those assessable sub-components aren't part of the official CCSS specification; I had to stare at each standard and find a concise way to translate it into assessable sub-components. That's not easy. And I'm not necessarily going to get everything right.

So I also made the tool flexible so that if you don't like my terminology or the way I've done it, you can edit the labels or even add totally new subcomponents as you see fit.



You then end up with a rubric grid that you can further customize: add more CCSS-aligned rubric elements, add non-CCSS-aligned elements (e.g. "Class Citizenship" or anything else not captured by the CCSS), and add performance descriptors (the traditional rubric text we're used to seeing) in each grid cell. You can share your rubric online (post a link, Twitter, email, etc.), print it, or download it as an Excel CSV file.




And, of course, since this tool is part of EssayTagger, you can apply your rubric to an EssayTagger assignment and produce Common Core-aligned results data as you grade!



The Common Core Rubric Creation Tool is free for anyone to try and there's no registration or sign in required.

That being said, I certainly would not be disappointed if you decided to go deeper into the EssayTagger world and create a free trial account in order to see what life is like when you grade essays in our system. And as I always have to point out: EssayTagger is NOT an auto-grader. You make all the evaluations, you provide all the feedback. We just make it easier and more efficient for you to do so!

Thursday, March 14, 2013

Adapting traditional rubrics for EssayTagger: Nevada Opinion Writing Rubric (5th grade)

EssayTagger represents an evolution of the concept of a rubric. Here's a specific look at how I adapted an existing rubric to take advantage of the EssayTagger world.


If you're new to the EssayTagger world, here's a primer on how EssayTagger rubrics are different from traditional rubrics.

Tearra Bobula, a teacher at Mark Twain Elementary in Carson City, NV, asked me to adapt the Nevada Opinion Writing rubric. It presented a bit of a challenge at first: it consists of five main sections, each containing 2-4 additional sub-elements:


Let's take a closer look at the first section:


Each row of this section pertains to the Statement of Purpose/Focus, but assesses slightly different aspects of that overall area. I would break these four sub-elements down to something like:
  • Statement of Opinion
  • Focus
  • Maintain Purpose/Focus
  • Provides Context
So when I adapted this rubric I treated each sub-element as its own rubric element:

Friday, February 1, 2013

Using EssayTagger to coordinate PLT assessments, pt2

Part 1: PLTs must have common assignments and common assessments
Part 2: How to coordinate PLTs with EssayTagger
Part 3: Analyzing the data reports (coming soon)

In part two we show you a simple way to increase PLT coordination while maintaining each teacher's individual voice and personal flair.


Let's assume you're onboard with the idea that PLTs need to have a few common assignments that have common assessments in order to gauge the PLT's progress and effectiveness (if not, check out part 1).

Now how do we do this? I closed part 1 by sharing how much I hate common assessments because they are never in my voice and feel like a foreign presence in my classroom. Education reformers would be wise to note that jarring students out of the environment they're used to isn't the best way to assess the effectiveness of that environment!

Producing uniform PLT assessment data seems incompatible with preserving the unique flair and character of each teacher's classroom.

EssayTagger provides a way around that conundrum.


Shared rubrics
Rubrics are at the heart of how teachers assess written work in EssayTagger. And they are EssayTagger's secret weapon to solving the problem at hand.

Have your PLT agree upon a shared assignment. Let's say all of the Sophomore English teachers will be teaching "The Tempest". We can agree upon a few key goals for our Tempest unit and develop a summative essay assignment for the end of the unit.


Collaborate on the rubric
Now have one teacher log into her EssayTagger account (or jump to our free Common Core Rubric Creation Tool) while the PLT discusses what they'd like to see in the rubric for this shared assignment. Consider the PLT's goals for the unit and begin building the rubric in EssayTagger. Again, we only need one transcriber to create the rubric.

Using EssayTagger to coordinate PLT assessments, pt1

It's becoming more and more important to coordinate curriculum and assessment within PLT teacher teams. In part one we'll briefly discuss PLTs, motivate why coordination is so important, and discuss some of the challenges. Part two will discuss how to use EssayTagger to enhance that PLT coordination without stifling teachers' individual voices and strengths. Part three will look at how the resulting data can help each individual teacher and the PLT as a whole.



Part 1: PLTs must have common assignments and common assessments
Part 3: Analyzing the data reports (coming soon)


PLTs are in
Most schools seem to be moving toward the PLT--Professional Learning Team--model where, for example, all of the Sophomore English teachers would meet regularly, plan team goals, share resources and exercises, and hopefully develop a few common assignments and assessments.

However, I've been in schools that still operated with each teacher as his or her own island. In this sort of environment the PLT concept will likely be met with significant resistance. There will always be the I've-been-doing-it-my-way-for-35-years holdouts but even the most progressive-thinking teachers will worry about the constricting nature of making their classes more uniform and perhaps less unique.

On the flip side, I've been in schools that had weak or ineffective PLTs, despite significant administrator emphasis on them. Simply meeting every other week is not enough. We would talk about what each of us was doing, but there'd be no central focus or plan. It has to be more than just check-in-and-share time.

Sadly, teacher prep programs aren't taking a lead on this. I'm disappointed that my M.Ed. program didn't train us to collaborate with our peers. PLTs weren't even mentioned once during my two-year program. We're supposed to be the new guard, the fresh blood bringing a modern approach to education. But too many Schools of Education are themselves stuck in old-guard or outdated modes of thinking and practices.

So I feel like I have a pretty strong grasp of many of the challenges and pitfalls when it comes to PLTs. And it's no surprise that transitioning to a team approach can often be a difficult process when a culture of collaboration or direct experience with PLTs is lacking. But as you'll see in part two, there is hope. Incremental change and increased coordination is possible and can be facilitated by some 21st-century technology.


Coordination is king
A PLT has to have a set of common goals for their class sections. If a PLT doesn't have a common vision for student outcomes, you don't really have a PLT; you just have a bunch of individual teachers sitting in the same room. Common goals matter. My Sophomore English students have to be just as prepared to enter their Junior English class as the students from any other Sophomore English section. And the Junior English teachers should have a reliable set of expectations for what they'll get from their incoming juniors each year.

But just setting common goals isn't enough. We need to know if those goals are being met. Did our sophomores really get to where we wanted to get them? And how did my specific crop of sophomores do vis-à-vis the rest of the PLT's students? Did my kids see particular gains or struggles versus their peers? This isn't about outing a bad teacher or competing against my teammates. It's about being able to identify what is and is not working in my class and across all of our classes.

Monday, January 28, 2013

Latest update: Rubric descriptors now integrated into the grading app

We differentiate rubric "descriptors" that are designed to set performance expectations vs feedback comments that promote student growth. Long overdue, your rubric descriptors are now integrated into the feedback-driven grading app.


Rubrics serve two purposes
It's taken me a while to wrap my brain around this, but I finally had my "a-ha!" moment and clearly saw that rubrics serve (at least) two distinct purposes:
Purpose #1: Rubrics set performance expectations for students before they attempt the assignment. 
Purpose #2: Rubrics provide performance feedback after their work is assessed and scored.
A typical rubric grid cell for, say, Evidence will go something like, "Uses inadequate examples, evidence, or reasoning to support its position." This sort of vague language always frustrated me because I only cared about Purpose #2 (rubrics as feedback). In fact, this was a large part of the motivation for me to create EssayTagger in the first place. I wanted to be able to give students more specific feedback at a per-sentence level. I wanted to be able to coach them on every individual piece of evidence rather than offering a single generic statement.

And I tended to pooh-pooh Purpose #1 because I set expectations in class by doing a ton of group and peer review where everyone evaluated samples and compared notes against my evaluations. It was amazing to see how close the class peer review averages were to my own determinations on the essay samples. At that point it didn't seem necessary to re-establish those expectations in a formal rubric.

So I built EssayTagger with only Purpose #2 in mind.


Enter "descriptors"
But many teachers told me that they believe strongly in Purpose #1 (using a rubric to set expectations). I try my best to avoid letting my personal biases get in the way and prevent other teachers from being able to incorporate EssayTagger into their classrooms.

So I developed the "descriptor" feature in EssayTagger to support Purpose #1. Descriptors set expectations. Enter them into your rubric and share it or print it out for your students. They can review the rubric and the descriptor text before they write the assignment.

Here's an example:

As you can see, this EssayTagger rubric looks like a traditional rubric with high-level expectation-setting descriptors.

However, because descriptors usually make for horrible feedback comments (failing to serve Purpose #2), they were kept separate from the targeted feedback comments that are the real bread-and-butter of the EssayTagger system.

Because of this separation--Purpose #1 vs Purpose #2--I did not even display the descriptors in the grading app. I wanted to include them but I wasn't sure how to do it without creating confusion between descriptors and feedback comments.


Descriptors now integrated into grading app
A recent email exchange with Stephanie Bester of Thurgood Marshall Middle School finally prompted my second "a-ha!" moment and I finally figured out how to display the descriptors in the grading app in a way that would minimize confusion.

Wednesday, January 23, 2013

Latest update: Six-level rubric support

Thanks to teacher feedback, EssayTagger rubrics can now have up to six possible quality levels.

I had previously limited rubrics to a max of five quality levels mostly due to practical constraints; there just wasn't enough left-to-right space in the grading app to comfortably accommodate six quality levels. But after a series of recent cosmetic updates, the grading app now has plenty of breathing room.



Then: Law of diminishing returns
But I was still skeptical. I knew that six-level rubrics were popular, but I never used six-level rubrics in my classroom. For me, anything beyond five levels started to get overwhelming. How could I possibly remain consistent in evaluating ever-finer levels of distinction?

Sunday, December 2, 2012

Latest Update: Rubrics can now be downloaded as Excel files!

We're doing everything we can to encourage more teacher collaboration within teams and across the entire web. One of the main ways we do this is through rubric sharing.

Instructors can already create their own rubrics; share them via email, Twitter, Facebook, or hyperlink; print them (Macs can save the printable version to PDF); import any EssayTagger rubric into their own accounts; and edit those rubrics however they please.

Now you can also download any EssayTagger rubric as an Excel CSV file.

The CSV file format is very common and is supported by most spreadsheet programs (Excel, Google Drive spreadsheet, etc.).
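Because CSV is plain text, a downloaded rubric is also easy to process programmatically. Here's a minimal Python sketch of loading one; note that the column layout shown (an "Element" column followed by quality-level columns) is purely hypothetical, since the actual export format may differ:

```python
import csv
import io

# Hypothetical rubric CSV text -- a real EssayTagger download may use
# different column names and layout.
sample_csv = """Element,Level 1,Level 2,Level 3,Level 4
Thesis,Missing,Emerging,Proficient,Advanced
Evidence,Missing,Emerging,Proficient,Advanced
"""

def load_rubric(csv_text):
    """Parse rubric CSV text into a dict: element -> {level: descriptor}."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)          # first row: element label + level names
    levels = header[1:]
    return {row[0]: dict(zip(levels, row[1:])) for row in reader}

rubric = load_rubric(sample_csv)
print(rubric["Thesis"]["Level 4"])   # -> Advanced
```

When reading a real downloaded file you'd open it with `open(path, newline="")` instead of wrapping a string in `io.StringIO`.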

Friday, November 30, 2012

Latest Update: Downloadable results data!

As part of our push for new and improved data reporting, you can now download all of your results/grading data for each of your assignments. This feature is fully backward-compatible with all existing assignments.

We don't believe in vendor lock-in so we're happy and excited to offer yet another way for you to access your results data. It's your data; it shouldn't be trapped on our servers.


What's in the download?
All of the data in the chart shown below will be included in the data download as well as a few extra fields. Here's the full list:

Using EssayTagger to level expectations within teacher teams

Teacher teams should have a common vision for what "success" means for their students. EssayTagger collects and analyzes a ton of data which can be used to create consistent expectations across the teacher team.


Whenever you grade an assignment in EssayTagger you end up with an assortment of data reports that provide a deeper insight into how your students performed, based on your evaluations.

That's all well and good, but what's the relevance for teachers operating in a team-based approach? Why would the rest of the Sophomore English team care about the results from my two Soph Eng sections?


At a minimum, compare results and discuss
Maybe I find that my sections are doing reasonably well on Thesis but are still developing their skills with Counterclaims. Are the other Soph Eng teachers seeing the same thing with their students?

If so, we can talk about strategies to improve their work with addressing the opposing viewpoint.

Or perhaps we'll find that my Thesis results look stronger than the other teachers' results. Now things get interesting. Am I doing something awesome that's really working with my kids or am I just grading their theses too generously?

Thursday, November 29, 2012

Latest Update: New data reports!

With EssayTagger's core platform in place, it's time to turn our attention to the incredibly rich data that is generated when you grade your essays in our system.


UPDATE 11/3:
We've already updated the charts quite a bit and have updated this post to reflect the changes!

UPDATE 11/29:
Even more improvements and two new charts! Post updated again.

UPDATE 11/30:
You can now download your grading data to Excel!


We've reached the first milestone of our major push to enhance and extend the data reporting features of the site. Today's release opens the first new data reports on a beta test basis. "Beta" in programmer lingo means it's not yet finalized, but is mostly where it needs to be. There will likely be further refinements based on instructors' feedback as well as minor bugs to be fixed.


Quick highlights
  • "Section snapshot" overall section-wide aggregate performance graph
  • "Section details" chart of all students' performance on each rubric element
  • "Individual details" in-depth view of a particular student's performance on the assignment
  • Statistically significant outlier identification to help you focus on the students who are furthest from the pack

All of these data reports are amazingly useful tools for teachers, but I'm particularly excited about the statistical analysis we're able to provide. You don't have to know the first thing about stats, standard deviation, or z-values; we're computing everything for you and flagging the kids that need your attention the most!

You grade, we crunch the numbers. How awesome is that?!
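For the curious, the kind of outlier flagging described above can be sketched in a few lines of Python. This is purely illustrative: the z-score threshold, the scoring scale, and the student data are all made up, and this isn't EssayTagger's actual implementation.

```python
import statistics

def flag_outliers(scores, z_threshold=1.5):
    """Flag students whose average score is an unusually large number of
    standard deviations from the section mean.

    The 1.5 threshold is an illustrative choice for small sections; with
    only a handful of students, no single score can stray very far in
    z-score terms.
    """
    mean = statistics.mean(scores.values())
    stdev = statistics.pstdev(scores.values())
    if stdev == 0:
        return []  # everyone scored identically; nobody is an outlier
    return [name for name, s in scores.items()
            if abs(s - mean) / stdev >= z_threshold]

# Hypothetical section averages on a 4-point rubric scale.
section = {"Ana": 3.1, "Ben": 2.9, "Cal": 3.0, "Dee": 3.2, "Eli": 1.0}
print(flag_outliers(section))  # -> ['Eli']
```

Eli's 1.0 sits roughly two standard deviations below the section mean, so he gets flagged as the student who most needs attention.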

(see the demo video here: http://youtu.be/WZsEoAJEkv0)


"Section snapshot" overall results
This is the new default view; you'll be routed here automatically when you click "exit grading app" once you're done grading. It's the broadest view of the data and includes two charts. The goal is to provide a rough "snapshot" look at how your class section performed as a whole on the essays graded thus far:



The stacked column graph displays how many of your students fell into which quality levels when you evaluated their essays in the grading app.

Put simply: the more green, the better.
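Conceptually, that stacked column is just a tally of how many evaluations landed at each quality level. A toy sketch (the level names and data below are hypothetical, not EssayTagger's real report data):

```python
from collections import Counter

# Hypothetical per-student evaluations for one rubric element.
evaluations = ["Proficient", "Advanced", "Emerging", "Proficient",
               "Proficient", "Advanced", "Emerging", "Missing"]

counts = Counter(evaluations)  # quality level -> number of students

# Crude text "chart": one '#' per student at each level.
for level in ["Missing", "Emerging", "Proficient", "Advanced"]:
    print(f"{level:10s} {'#' * counts[level]}")
```

The real charts do the same aggregation per rubric element, with color standing in for the quality level.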

Monday, November 26, 2012

Latest Update: Common Core progression-tracking!

Grades and GPAs are just rough estimates. It's more important to keep track of which skills your students have mastered. That's where Common Core-aligned progression tracking comes in. It's a big deal.

Most of us have had this skeptical suspicion: Am I aligning my curriculum to Common Core just because some bureaucrats said I have to?

If that were all that was behind Common Core, then it would absolutely be a waste of our time.

But curriculum alignment is just stage 1. Here's the full picture:

  1. Align curriculum to Common Core
  2. Assess within Common Core
  3. Report and track student data within Common Core
  4. Develop and share remediation strategies tied to Common Core
Only at this high-level view does it all start to come together. The overarching goal is to enable apples-to-apples comparisons that can then be used to drive stage 4 where every teacher in America is creating interchangeable exercises and materials.

Let's be more concrete. EssayTagger is focused on stages 1-3, culminating in our Common Core-aligned progression tracking that was just released:

Friday, November 16, 2012

Latest Update: Language and Speaking & Listening Common Core standards added!

Based on teachers' feedback, I've added the Language and Speaking & Listening standards to our Common Core Rubric Creation Tool.

The Language standards are necessarily quite mechanical (e.g. L.9-10.2a covers the semicolon) and as such are often more suited to drill-and-skill type exercises and assessments, though certainly an instructor could construct an essay rubric that included a few specific mechanical elements.



The Speaking & Listening standards are, not surprisingly, even further afield from EssayTagger's emphasis on essay assessment. However, the Common Core Rubric Creation Tool isn't limited to just EssayTagger use and teachers did request that this standard be included. And, if you're a little creative, there actually are ways to make the Speaking & Listening standards work within EssayTagger (e.g. evaluating students' self-assessments after delivering a speech).



Both of these Common Core standards were added to the tool as a direct result of teacher requests. I had intentionally passed over them when I originally released the tool.

We are incredibly receptive to instructor feedback so keep the comments coming!

Wednesday, September 26, 2012

Announcing: Free Common Core Rubric Creation Tool!

We're super-proud to announce the release of our new tool that helps teachers create Common Core-aligned rubrics! Open to the public, totally free.

EssayTagger's Common Core Rubric Creation Tool


You are ahead of the curve and are working hard to align your curriculum to Common Core. But assessing and tracking your students' progress within Common Core is difficult -- and nigh impossible to do for essays.

I spent the whole dang summer wrestling with the standards, trying to figure out how to incorporate them into real-world, practical writing rubrics.

My initial approach was to try to coax the actual text of the standards into a more rubric-friendly format. But teachers shouldn't have to waste their time adapting the W.8.1a text just to be able to include "Thesis" on their rubrics.

Instead just evaluate "Thesis" like you normally would but add, "Oh, and by the way, 'Thesis' is part of W.8.1a." This is where the tool comes in to help you.

Wednesday, July 4, 2012

Making Common Core work, pt2: The big picture

Teachers and administrators need to understand the big picture of where Common Core is headed. Here's your quick preview.

The long view
At a surface level the Common Core standards specify what students should know or be able to do. We're focused on how to integrate that into our classrooms. That part is straightforward and obvious.

But the big picture is much bigger than this. 

Establishing a common set of target skills is just step one. The Common Core standards are not a goal unto themselves but merely a means to an end. The real goals lie beyond. One of the major ones, not surprisingly, is all about data.

Knowing a student's GPA doesn't convey enough information. Knowing that she got a B- in Sophomore English isn't enough. But knowing that she's struggling with W.9-10.1d is useful.

The standards create a common reference point for learning targets that are otherwise ad hoc, disorganized, or nonexistent. Forget leaving notes to next year's teachers that "Johnny is weak on fractions" or that "Sarah struggles with citations." That world is coming to an end. Too much information is lost that way, too much time is wasted on reassessing students' abilities at the beginning of each year.

Instead teachers will have standardized reporting tools that use the Common Core framework to track a student's entire educational record on a skill-by-skill progression level. 

Common Core isn't just about what to teach, it's about tracking what has been learned.

Friday, June 29, 2012

Making Common Core work, pt1: Why it's awkward

Forget "aligning" with Common Core; how the heck do you even begin to use Common Core?!

This multi-part series will explore some possibilities for making Common Core relevant and actually useful in real-world classrooms.

I've been engaged in a number of great discussions lately about how best to incorporate the Common Core English/Language Arts (CC ELA) standards into the classroom. My vision for how to work with these standards is evolving quickly and I wanted to share my thoughts to stimulate further discussions.

And very soon I will be implementing some form of Common Core integration with EssayTagger. I'd rather have the idea be well-thrashed out before I build a half-baked solution.

But first we have to understand the Common Core ELA beast for what it is.


Basic tensions
Common Core is inevitable. It'll be on us faster than any of us are ready for and we best get prepared ASAP. Gripe and moan and cry all you want, it ain't gonna change a thing.

Worse: The language of the Common Core standards is not classroom-friendly or, more accurately, it is not student-friendly.

Worse(er) (hee hee! Relax!): The Common Core standards are not directly compatible with how we classroom teachers work with our students and provide feedback.

This all being said, the Common Core ELA standards are not bad. They are actually quite reasonable. They're just not a great fit; the administrators' standards-based data-tracking world does not align smoothly with classroom reality. Shocker.


Common Core - A closer look
Let's stop talking and dive in.

Monday, March 19, 2012

New Rubric: Common Core Explanatory / Informative Writing (9-10) rubric

The first of many rubrics distilled from the Common Core State Standards.

Update 9/21/12:
In the six months since this post was originally published, my view of how to integrate with Common Core has evolved a considerable amount. This post is now old news. I've built a free, publicly-accessible tool to help teachers create their own customized Common Core-aligned rubrics. It's going to make life SO much easier for all of us!

Read about this new approach or jump straight to the EssayTagger Common Core Rubric Creation Tool

Check it out and let me know what you think!


Original Post:
The Common Core State Standards. Oof.

You've heard all the talk. You suspect they might get in your way and make your life a living hell. Just thinking about them makes you want to curl up on the couch in the fetal position and take a nap (my default reaction to moderately stressful things).

I'm not here to sell you on its merits or argue that there is a lack thereof. I'm here to make your life a little bit easier when you find yourself held accountable to the Common Core standards when teaching writing.

Friday, March 2, 2012

New Rubric: "They Say, I Say"

One of our first demo rubrics is now available for anyone to use in their own EssayTagger assignments!

Gerald Graff was one of my professors at the University of Illinois at Chicago during my M.Ed. program. And, I'll be honest, I was very wary of "They Say, I Say" when he first explained the concept of the book to my class. But TSIS quickly won me over. And the skyrocketing sales that he and his wife/co-writer have enjoyed certainly show that others appreciate its value as well.

But there was one thing I noticed -- the book does not address assessment. I love the guidance it offers for teaching composition and the structure it gives to developing writers, but I felt like there was a missing final chapter on how to evaluate the resulting TSIS-style essays.

So I began developing a TSIS-style rubric that would work within the EssayTagger system. I met with Prof. Graff to show him an early draft and his eyes lit up with enthusiasm.

Now that I've completed EssayTagger's rubric sharing and import features, I can post the rubric for anyone to use:

EssayTagger "They Say, I Say" rubric:

This rubric is listed as a "work-in-progress" because, well, it is. But it's a pretty dang good start. And keep in mind that any rubric shared on EssayTagger is meant to be a starting point. Teachers should alter and customize these rubrics however they see fit.

Let me know what you think!

New Rubric: Four-Strand/Four-Level

From what I'm told, the Four-Strand/Four-Level rubric is fairly common in Washington schools. I've adapted it for use in EssayTagger (you can import it straight into your own assignments!) but you'll still want to customize it to suit your needs.

EssayTagger version of the Four-Strand/Four-Level rubric:

And as I've said previously, because rubrics are so macroscopic, they inevitably undergo some changes when they are adapted for the much more fine-grained world of EssayTagger.

Thursday, March 1, 2012

Changing how we think about rubrics

Traditional rubrics are too general and macroscopic to help students. The future is specificity. And it's here.

Sharing rubrics is a simple, but important, way for teachers to collaborate.

Unfortunately, traditional rubrics -- by their nature -- can only address general, overall trends in a paper: "Some evidence was insufficient." That's fine for a quick, high-level diagnostic, but it's not very helpful for the student.

My goal when I'm grading papers is to coach the students so they can learn from their mistakes and do better next time. Traditional rubrics are good for setting expectations before the attempt, but once the essays are graded, they're really just an assessment tool. They're not a learning tool.

In order to improve, students need more fine-grained feedback: Which specific piece of evidence was weak? Why wasn't it compelling?

Traditional rubrics simply can't address these questions (nor, to be fair, were they meant to). Traditional rubrics are macroscopic. But students need the microscopic.