
EssayTagger is a web-based tool to help teachers grade essays faster.
But it is not an auto-grader.

This blog will cover EssayTagger's latest feature updates as well as musings on
education, policy, innovation, and preserving teachers' sanity.

Wednesday, January 30, 2013

Grading with EssayTagger on your iPad via Photon browser

Today we discovered that the EssayTagger grading app can run on an iPad with a little help! Here are your step-by-step instructions for accessing EssayTagger through the Photon browser.


Install Photon on your iPad
From your iPad, go to the app store and search for "Photon browser". Make sure you select the iPad version and not the cheaper iPhone version. When I installed it today it was a $4.99 purchase.



Launch Photon
Photon is a web browser and really isn't all that different from any other web browser. There is one big special function you'll need to know about to enable Flash support, but Photon already does a good job of explaining it. You'll see.

Go to EssayTagger.com and either launch the interactive demo from the "try the demo" tab or log in to your account and click "start grading" to launch the grading app for one of your assignments.

When the grading app window opens you'll see a Flash error message instead of the grading app. But Photon explains what to do:



Once you click the lightning bolt icon at the top right, the grading app will be able to load:


Yay! The grading app is running on the iPad!


Adjust Photon settings
Now click the gear icon at the top right to open the settings screen. Change the following two settings:

Bandwidth: 6 - to maximize responsiveness and text quality. Photon runs the grading app remotely on its own server and streams it to your iPad. That's how they're able to support Flash on the iPad--because it's actually running Flash somewhere else! If the bandwidth setting is too low, the essay text will look chunky due to the transmission compression.

Mode: Web - I found this produced the best looking text and further minimized compression chunkiness.



That's all we need to do here. Photon will remember these settings so you won't have to worry about these again.


Using the grading app
A couple tips to make your life easier. Photon's default interactive mode is not a great match for the grading app's drag-and-drop interface. Once the grading app loads, switch to the "grab" mode at the top right:



With this mode enabled, dragging and dropping works extremely well. You can also easily select text passages to mark for errors or enter a free comment.

I would also take advantage of the grading app's built-in font size adjuster buttons on the left. Increasing the essay font size will make it easier to read and easier to select text. Here's a set of before and after screenshots:




Use a bluetooth keyboard
The iPad on-screen keyboard does not work well when using the grading app in Photon. It often ends up covering up the text box so you can't see what you're typing:


D'oh! Covered up the text box!

My bluetooth keyboard worked just fine. The only slight oddity was that I had to press the keyboard button at the top right of Photon to get it to start receiving my typing.



Other minor tips
Horizontal orientation: The grading app makes the best use of space if you launch it while the iPad is oriented horizontally.

Essay scrolling: Scrolling through an essay is a little difficult because the vertical scrollbar in the grading app is so narrow. It will click-and-drag just fine but you'll have to be a bit precise to "grab" it.


It might be easier to just tap the gray area above or below the scroll bar to move the essay.


Quick tips: It's also a little hard to access the "Quick tips" rollout help topics at the top of the grading app. Switch to the mouse pointer icon mode to make it easy to "hover" over the Quick tips help items:




That's it for now! Go grade and feel all super-high tech!

Yes!! EssayTagger on your iPad!

EssayTagger's patent pending interactive grading app is built in Flash and works great in any web browser. However, Steve Jobs decided years ago that Apple would not support Flash on iOS devices (iPad, iPhone). I've experimented with a special version of our grading app that can work as an installed iPad app, but it's a long way from being ready to put in the iTunes app store. I figured iPad support would just have to wait.

But then Alaina Langdahl of Parkrose High School in Portland, OR suggested we take a look at a few iPad web browser apps that serve as alternatives to the built-in Safari browser and, most importantly, support Flash!

I'm ecstatic to announce that the Photon browser does a surprisingly good job of bringing the EssayTagger grading app to life on an iPad!


Holy awesomeness!!!

Drag-and-drop interactive grading on a tablet! This is exactly how I imagined using EssayTagger when I first started this company!

All grading app features are fully supported when used through the Photon browser. There are some important settings that will vastly improve the experience. I'll update this post soon with step-by-step instructions.


The downside
Photon browser is a $4.99 app purchase. I know, that stinks.

We have no relationship with Photon and will be evaluating other Flash-enabled iPad browsers. Hopefully a free option will emerge that offers support for all of the features required to run the grading app. Ideally Apple would finally come around and support Flash, but that's not very likely (in fact, even Android is moving away from Flash with its latest Android Jelly Bean 4.2 OS).

Until we identify a viable free solution, it'll be up to you to decide if grading on your iPad is worth $4.99.


Updates:
Step-by-step instructions for using EssayTagger with the Photon browser are now posted!

- Alaina is reporting that she's having success with the free Puffin iPad browser. My testing with Puffin was less successful but I'll take a closer look at it as soon as I can.

Monday, January 28, 2013

Latest update: Rubric descriptors now integrated into the grading app

We differentiate rubric "descriptors," which are designed to set performance expectations, from feedback comments that promote student growth. Long overdue, your rubric descriptors are now integrated into the feedback-driven grading app.


Rubrics serve two purposes
It's taken me a while to wrap my brain around this, but I finally had my "a-ha!" moment and clearly saw that rubrics serve (at least) two distinct purposes:
Purpose #1: Rubrics set performance expectations for students before they attempt the assignment. 
Purpose #2: Rubrics provide performance feedback after students' work is assessed and scored.
A typical rubric grid cell for, say, Evidence will go something like, "Uses inadequate examples, evidence, or reasoning to support its position." This sort of vague language always frustrated me because I only cared about Purpose #2 (rubrics as feedback). In fact, this was a large part of the motivation for me to create EssayTagger in the first place. I wanted to be able to give students more specific feedback at a per-sentence level. I wanted to be able to coach them on every individual piece of evidence rather than offering a single generic statement.

And I tended to pooh-pooh Purpose #1 because I set expectations in class by doing a ton of group and peer review where everyone evaluated samples and compared notes against my evaluations. It was amazing to see how close the class peer review averages were to my own determinations on the essay samples. At that point it didn't seem necessary to re-establish those expectations in a formal rubric.

So I built EssayTagger with only Purpose #2 in mind.


Enter "descriptors"
But many teachers told me that they believe strongly in Purpose #1 (using a rubric to set expectations). I try my best not to let my personal biases get in the way of other teachers incorporating EssayTagger into their classrooms.

So I developed the "descriptor" feature in EssayTagger to support Purpose #1. Descriptors set expectations. Enter them into your rubric and share it or print it out for your students. They can review the rubric and the descriptor text before they write the assignment.

Here's an example:

As you can see, this EssayTagger rubric looks like a traditional rubric with high-level expectation-setting descriptors.

However, because descriptors usually make for horrible feedback comments (failing to serve Purpose #2), they were kept separate from the targeted feedback comments that are the real bread-and-butter of the EssayTagger system.

Because of this separation--Purpose #1 vs Purpose #2--I did not even display the descriptors in the grading app. I wanted to include them but I wasn't sure how to do it without creating confusion between descriptors and feedback comments.


Descriptors now integrated into grading app
A recent email exchange with Stephanie Bester of Thurgood Marshall Middle School prompted my second "a-ha!" moment: I finally figured out how to display the descriptors in the grading app in a way that would minimize confusion.

Wednesday, January 23, 2013

Latest update: Six-level rubric support

Thanks to teacher feedback, EssayTagger rubrics can now have up to six possible quality levels.

I had previously limited rubrics to a max of five quality levels mostly due to practical constraints; there just wasn't enough left-to-right space in the grading app to comfortably accommodate six quality levels. But after a series of recent cosmetic updates, the grading app now has plenty of breathing room.



Then: Law of diminishing returns
But I was still skeptical. I knew that six-level rubrics were popular, but I never used six-level rubrics in my classroom. For me, anything beyond five levels started to get overwhelming. How could I possibly remain consistent in evaluating ever-finer levels of distinction?

Wednesday, January 16, 2013

Using EssayTagger for fast formative assessment, pt2

In part two we explore a method for fast, effective formative assessment by leveraging EssayTagger's strengths and incredible built-in data reporting.



Part two: Fast, effective formative assessment with EssayTagger


"If students receive feedback often and regularly, it enables better monitoring and self‐regulation of progress by students."
- Nicol and Macfarlane-Dick

At face value EssayTagger's core function--grading essays more efficiently--seems better suited to end-of-unit essay evaluation (summative assessments). But as you'll see we can easily leverage EssayTagger's strengths to hit all three formative assessment keys discussed in part one: speed, detailed diagnostics, and quality feedback.


Basic approach
Develop open-ended, journal-style written response questions aligned with unit goals and then evaluate students' work in EssayTagger, focusing on short, targeted feedback. Then review the evaluation results data to refine class-wide instruction and target individual reinforcement or remediation. Ideally you would repeat this process 2-3 times throughout the unit before the end-of-unit summative assessment.


A concrete example: The Tempest, Sophomore English
When studying Shakespeare with sophomores we need to work on the mechanical skill of processing the complex text and would like to see the students develop an engagement with the text at an emotional, human level. A final summative assessment might come in the form of an essay prompt like, "Do Prospero's ends justify his means?" which would require a detailed understanding of the text and characters along with an expectation of referencing appropriate textual evidence.

Using EssayTagger for fast formative assessment, pt1

In part one we'll quickly review what formative assessment is and some of its key characteristics. Then we'll learn how to use EssayTagger for fast, effective formative assessment.

"The giving of marks and the grading function are overemphasized, while the giving of useful advice and the learning function are underemphasized."  
- Black and Wiliam

Buzzword primer
I often get lost in the absurd world of edu-speak lingo. So before we even start, let's define our two key terms:

Formative assessment is an approach where the teacher "build[s] in many opportunities to assess how students are learning and then use[s] this information to make beneficial changes in instruction" (Boston). Formative assessments happen during a unit, within the flow of instruction. It's about quickly diagnosing problems and adjusting what you're doing tomorrow to produce better results before the unit ends.

Summative assessment "generally takes place after a period of instruction and requires making a judgment about the learning that has occurred (e.g., by grading or scoring a test or paper)" (Boston). You could also call this "Final assessment"--it's looking to measure the end result of instruction.

The two can be boiled down to: "where are we struggling?" (formative assessment) vs. "how did we do?" (summative assessment). Or, if you prefer a more colorful analogy: "what's the patient's temperature?" vs "how many patients survived?"


Formative Assessment Key #1: Speed
If your goal is to modify instruction tomorrow, clearly your formative assessments need to be fast. It would be absurd for a nurse to take a patient's temperature and then have to wait a week for the results.

Formative Assessment Key #2: Detailed diagnostics
One of the key principles behind formative assessment is that it "provides information to teachers that can be used to help shape teaching" (Nicol and Macfarlane-Dick). In this sense it is diagnostic, identifying the areas where students are struggling. The more detail it can provide--exactly who is struggling in which areas--the better, but this generally slams up against the need for speed. It's very difficult to do quick formative assessments that are highly detailed and still allow the teacher to have a life.

Formative Assessment Key #3: Quality feedback
While the first two keys were teacher-centric, this one is student-centric. Part of what powers formative assessment's effectiveness is the targeted feedback provided to each individual student. It's not enough to merely see where course corrections are needed; each student must be explicitly steered in that direction.

Tuesday, January 15, 2013

System alert: Partial system outage - resolved

FINAL UPDATE 5:31pm
The last lingering effects of the Google App Engine server problems seem to have been cleared up and our own testing now shows full functionality restored.

This sort of downtime is frustrating, but it's still only the second Google App Engine outage since we launched EssayTagger 14 months ago. All sites suffer some downtime but we still believe in Google's reliability and ability to react and recover faster and more robustly than we could if we were managing our own server hardware.


UPDATE 1:23pm
Message from Google:
"We are still working to resolve the issue related to Google App Engine serving. At this point error rates for affected applications should be declining. We will provide another status update by 11:30 AM PST."

EssayTagger is responding again, albeit slowly. Grading app is still severely impacted.


UPDATE 12:07pm
Google App Engine servers continue to see problems and it seems to have spread beyond the backend task queue. They've rescinded their earlier resolution note and are now only saying "We will update this message shortly when the incident has been resolved."

These server issues are now affecting the main EssayTagger site.


UPDATE 10:35am
Message from Google:
"This morning some Google App Engine applications reported elevated error rates and increased latency. This issue should be resolved as of 8:10 AM US/Pacific time. We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are making continuous improvements to make our systems better."

There still seem to be some intermittent slowdowns for backend processes (uploading an essay, marking an essay as graded). Though Google says the issue is resolved, I still recommend caution.


ORIGINAL POST 1/15 10:28am
At approximately 9am CST Google's App Engine servers suffered a problem with their "task queue" service which EssayTagger uses for behind-the-scenes processing. The majority of the site has not been affected.

Specific interruptions occurred on the student upload page which relies on the task queue to process incoming essay files. The other major impact was that the grading app was unable to process essays when they were marked as graded. Normal grading activity (evaluating rubric elements, adding comments, etc) should not have been affected.

The task queue service is currently intermittent. Take care if using the grading app and keep an eye on the "synced" or "syncing..." message at the top right. If it remains on "syncing..." for more than a few seconds, pause before doing any more work. Your work is only guaranteed to be saved when you see the "synced" message.
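The "synced"/"syncing..." behavior described above boils down to a familiar pattern: pending saves sit in a background work queue, and the status flag only flips to "synced" once every queued item has been persisted. Here's a minimal sketch of that pattern in Python (the class and method names are my own illustration, not EssayTagger's actual code):

```python
import queue
import threading

class SyncQueue:
    """Toy model of a grading app's save queue: changes are persisted
    by a background worker thread, and the status reads "synced" only
    once every pending item has been processed."""

    def __init__(self):
        self._tasks = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            task = self._tasks.get()
            task()                      # persist one change (e.g. a rubric evaluation)
            self._tasks.task_done()

    def submit(self, task):
        self._tasks.put(task)

    def wait(self):
        self._tasks.join()              # block until the backend catches up

    @property
    def status(self):
        return "synced" if self._tasks.unfinished_tasks == 0 else "syncing..."

saved = []
q = SyncQueue()
q.submit(lambda: saved.append("mark essay as graded"))
q.wait()
print(q.status)   # -> synced
```

The user-facing advice follows directly from this model: if the backend (here, the worker thread; in production, the task queue service) stalls, submitted work sits unprocessed and the status stays on "syncing..." indefinitely.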

Updates will be posted as new information arises.

Monday, January 7, 2013

Latest Update: Error marking!

Thanks to input from our users we now support a dedicated feature for marking spelling, grammar, or other errors. But this new feature is more than just a red underline. Read on to learn more!


EssayTagger is built to help teachers evaluate student work within the structure of a custom rubric and provide excellent, specific feedback to students. But instructors made it clear to us that we needed direct support to be able to mark errors--the dreaded red pen markups on a paper. It makes sense; marking a grammar error is different from evaluating a weak thesis.


Error Mark overview
The new feature makes it easy to mark problematic passages as having an error. Marked passages will have a red underline. You can enter an optional comment about the error. When a student receives her graded work, she'll see the red underlines sprinkled throughout her essay.

But here's the coolest part: all of the marked passages will be collected into a list at the end to make it easy for the student to do a follow-up correction exercise.

And just to be clear: These are errors that you determine. The grading app does not do any auto-evaluation whatsoever. EssayTagger is always driven by your brain, your expertise. We do not believe in auto-grading software!!


Let's see an example!
Error marking piggybacks on our existing "free comment" system to make it super quick and easy to mark an error.

Just select the problematic text:



And when you release the mouse button the new Mark Error/Free Comment popup box will appear:



Just click "mark as error" and the text will be underlined in red. That's it!



The student will now see this error mark in the final graded output. Here's what the student sees:



You can also enter an optional comment about the error. Comments appear in the list of marked passages at the end of the graded essay, prefaced with your initials:



Notice that the whole sentence is presented so that the underlined portion appears within its full sentence context.
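That whole-sentence display can be approximated with a simple lookup: find the sentence boundaries that bracket the marked span and return everything between them. Here's a rough sketch (naive punctuation-based splitting; the grading app's actual logic isn't documented in this post):

```python
import re

def sentence_context(essay, start, end):
    """Return the full sentence containing the marked span
    essay[start:end), using a naive end-of-sentence split."""
    # Boundaries sit just after ., !, or ? followed by whitespace.
    boundaries = ([0]
                  + [m.end() for m in re.finditer(r'[.!?]\s+', essay)]
                  + [len(essay)])
    s = max(b for b in boundaries if b <= start)   # sentence start
    e = min(b for b in boundaries if b >= end)     # sentence end
    return essay[s:e].strip()

essay = "My thesis is strong. Their going to the store was key. The end."
mark = essay.index("Their")
print(sentence_context(essay, mark, mark + len("Their")))
# -> Their going to the store was key.
```

A production version would need smarter sentence detection (abbreviations, quotes, dialogue), but the principle is the same: store only the marked span's offsets and recover the surrounding sentence for display.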

Pretty damn cool, right?!


A word of advice
In most cases I recommend not entering a comment about the error. Instead hold the student responsible for reviewing her errors and thinking through them herself to figure out and learn from her own mistakes. She can seek out help if she needs it, but we shouldn't take on the responsibility of making corrections for the students when it really doesn't do them any good.

I'd much rather have students submit corrections to earn back some mechanics points rather than having me write endless "subject-verb agreement" or "you're/your" comments that the students won't even read.

Think of it as a chance to put those Active Learning vs Passive Learning PD workshops to use!


Available now!
As with all EssayTagger feature updates, this is available now to all EssayTagger users. Every time we upgrade the site, all users benefit!



Additional features coming soon
We will update the data reporting screens to include the number of errors marked in each essay, aggregate data (e.g. the average number of errors marked on the assignment), and individual-vs-aggregate outlier analysis (i.e. does a student have statistically significantly more errors than his peers?).
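For the statistically curious, an outlier analysis like this could be as simple as a z-score cutoff over the class's error counts. Here's a hedged sketch of the idea (the threshold and method are my own illustration, not the actual report's algorithm):

```python
import statistics

def flag_error_outliers(error_counts, z_threshold=1.5):
    """Flag students whose marked-error count is unusually high
    relative to the class average (simple z-score heuristic)."""
    counts = list(error_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)    # population standard deviation
    if stdev == 0:
        return {"class_average": mean, "outliers": []}
    outliers = [name for name, n in error_counts.items()
                if (n - mean) / stdev > z_threshold]
    return {"class_average": mean, "outliers": outliers}

report = flag_error_outliers({"Ann": 3, "Ben": 2, "Cal": 4, "Dee": 18, "Eve": 3})
print(report["outliers"])   # -> ['Dee']
```

With small class sizes a single heavy essay can dominate the average, so a real report would likely want a more robust measure (median-based, or per-hundred-words error rates) before flagging anyone.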