Tuesday, October 27, 2009

Evaluation Methods: Each user is unique. Assess each one first, then look for patterns

On Monday, I talked about my belief, as a novice evaluator and educator, that evaluation (and teaching) should be organized around programmatic goals: describe what every student should learn, study each student's progress toward those goals, and study the program activities that are most crucial (and perhaps most risky) for producing those outcomes.

After some years of experience, however, first at Evergreen and then as a program officer with the Fund for the Improvement of Postsecondary Education (FIPSE), I realized that this Uniform Impact perspective was valuable but limited.

In fact, I now think there are two legitimate, valuable ways to think about, and evaluate, any educational program or service:
  • Uniform Impact: Pay attention to the same learning goal(s) for each and every student (or, if you're evaluating a faculty support program, pay attention to whether all the faculty are making progress in a direction chosen by the program leaders).
  • Unique Uses: Pay attention to the most important positive and negative outcomes for each user of the program, no matter what those outcomes are.
You can see both perspectives in action in many courses. For example, if an instructor gives three papers an “A,” and remarks, “These three papers had almost nothing in common except that, in different ways, they were each excellent,” the instructor is using a Unique Uses perspective to do assessment.

Each of these two perspectives focuses on things that the other perspective would miss. A Unique Uses perspective is especially important in liberal and professional education: both aim to educate students to exercise judgment and make choices. If every student had the same experiences and outcomes, the experience would be training, not liberal or professional education.

Similarly, Unique Uses is important for transformative uses of technology in education, because many of those uses are intended to empower learners and their instructors. For example, when a faculty member assigns students to set their own topics and then use the library and the Internet to do their own research, some of the outcomes can only be assessed through a Unique Uses approach.

What are the basic steps for doing a Unique Uses evaluation?
  1. Pick a selection, probably a random selection, of users of the program (e.g., students).
  2. Use an outsider to ask them what the most important consequences have been from participating in the program, how they were achieved, and why the interviewee thinks their participation in the program helped cause those consequences (evidence).
  3. Use experts with experience in this type of program (Eliot Eisner has called such people 'connoisseurs' because they have educated judgment honed by long experience) to analyze the interviews. For each user, the connoisseur would summarize the value of the outcomes in the connoisseur's eyes, using one or more rating scales. (A rough sketch of this sampling and rating bookkeeping follows the list.)
  4. The connoisseur would also comment on whether and how the program seems to have influenced the outcome for this individual, perhaps with suggestions for how the program could do better next time with this type of user.
  5. The connoisseur(s) then look for patterns in these evaluative narratives about individuals. For example, the connoisseur(s) might notice that many of the participants encountered problems when, in one way or another, their work carried them beyond the expertise of their instructors, and that instructors seemed to have no easy strategy for coping with that.
  6. Finally, the connoisseur(s) write a report to the program with a summary judgment, recommendations for improvement, or both, illustrated with data from relevant cases.
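To make the bookkeeping behind steps 1, 3, and 5 concrete, here is a minimal sketch in Python. The roster, the sample size of 12, and the 1-5 scales are all hypothetical, and the ratings are random placeholders; in a real study they would come from connoisseurs reading actual interviews.

```python
import random
from statistics import mean

# Step 1: a hypothetical roster of participants and a random sample to interview.
participants = [f"student_{n:03d}" for n in range(1, 201)]
sample = random.sample(participants, k=12)

# Step 3: placeholder connoisseur ratings (1-5) of each interview:
# how valuable the outcomes were, and how much the program seemed to influence them.
ratings = {
    user: {"outcome_value": random.randint(1, 5),
           "program_influence": random.randint(1, 5)}
    for user in sample
}

# Steps 5-6: look for patterns worth reporting, e.g. users with strong outcomes
# that the program may not explain, alongside an overall summary figure.
avg_value = mean(r["outcome_value"] for r in ratings.values())
unexplained = [u for u, r in ratings.items()
               if r["outcome_value"] >= 4 and r["program_influence"] <= 2]

print(f"Sampled {len(sample)} users; mean outcome value {avg_value:.1f}")
print("Strong outcomes the program may not explain:", unexplained)
```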
To repeat, a comprehensive evaluation of almost any academic program or service ought to have both Uniform Impact and Unique Uses components, because each type of study will pick up findings that the other will miss. Some programs (e.g. a faculty development program that works in an ad hoc manner with each faculty member requesting help) are best served if the evaluation is mostly about Unique Uses. A training program (e.g., most versions of Spanish 101) is probably best evaluated using mainly Uniform Impact methods. But most programs and services need some of each method.

There are subtle, important differences between these two perspectives. For example,
  • Defining “excellence”: In a Uniform Impact perspective, program excellence consists of producing great value-added (as measured along program goals) regardless of the characteristics or motivations of the incoming students. In contrast, program excellence in Unique Uses terms is measured in part by generativity: Shakespeare's plays are timeless classics in part because there are so many great, even surprising ways to enact them, even after 400 years. The producer, director and actors are unique users of the text.
  • Defining the 'technology': From a Uniform Impact perspective, the technology will be the same for all users. From a Unique Uses perspective, one notices that different users make different choices of which technologies to use, how to use them, and how to use their products.
For more on our recommendations about how to design evaluations, especially studies of educational uses of technology, see the Flashlight Evaluation Handbook. The Flashlight Approach, a PDF in Section I, gives a summary of the key ideas.

Have any evaluations or assessments at your institution used Unique Uses methods? Should they in the future? Please click the comments button below and share your observations and reactions.

PS We're past 3,300 visits to http://bit.ly/ten_things_table. So far, however, most people seem to look at the summary and perhaps one essay. Come back, read more of these mini-essays, and share more of your own observations!

Monday, October 26, 2009

12. To evaluate ed tech, set learning goals & assess student progress toward them (OK but what does this approach miss?)

It's Monday so let's talk about another one of those things I no longer (quite) believe about evaluation of educational uses of technology. Definition: “Evaluation” for me is intentional, formal gathering of information about a program in order to make better decisions about that program.

In 1975, I was the institutional evaluator at The Evergreen State College in Olympia, Washington. I'd offer faculty help in answering their own questions about their own academic programs (a “program” is Evergreen's version of a course). Sometimes faculty would ask for help in framing a good evaluative question about their programs. I'd respond, “First, describe the skills, knowledge or other attributes that you want your students to gain from their experience in your program.”

“Define one or more Learning Objectives for your students” remains step 1 for most evaluations today, including (but not limited to) evaluating the good news and bad news about technology use in academic programs. In sections A-E of this series, I've described five families of outcomes (goals) of technology use, and suggested briefly how to assess each one.

However, outcomes assessment by itself provides little guidance for how to improve outcomes. So the next step is to identify the teaching/learning activities that should produce those desired outcomes. Then the evaluator gathers evidence about whether those activities have really happened, and, if not, why not. Evidence about activities can be extremely helpful in a) explaining outcomes, b) improving outcomes, and c) investigating the strengths, weaknesses and value of technology (or any sort of resource or facility) for supporting those activities.

Let's illustrate this with an example.

Suppose that your institution has been experimenting with the use of online chats and emails to help students learn conversational Spanish. As the evaluator, you'd need to have a procedure for assessing each student's competence in understanding and speaking Spanish. Then you'd use that method to assess all students at the end of the program and perhaps also earlier (so you could see what they need at the beginning, how they're doing in the middle, and what they've each gained by the end).

You would also study how the students are using those online communications channels, what the strengths and weaknesses of each channel are for writing in Spanish, whether there is a relationship between each student's use of those channels and their progress in speaking Spanish, and so on.
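As a minimal sketch of the outcomes-plus-activities analysis just described, the Python fragment below computes each student's gain in a made-up speaking score and relates it to the hours they spent in the online chats. The scores, the hours, and the student labels are all invented for illustration; a real evaluation would also gather evidence about how the channels were used, not just how much.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical records: speaking scores (0-100) at the start and end of the
# term, plus hours each student spent in the online Spanish chats.
records = [
    {"student": "A", "pre": 42, "post": 71, "chat_hours": 18},
    {"student": "B", "pre": 55, "post": 63, "chat_hours": 4},
    {"student": "C", "pre": 38, "post": 52, "chat_hours": 9},
    {"student": "D", "pre": 60, "post": 88, "chat_hours": 22},
]

gains = [r["post"] - r["pre"] for r in records]
hours = [r["chat_hours"] for r in records]

# A first, crude look at whether chat use and proficiency gains move together.
# (Correlation is only a signal to investigate further, not proof of cause.)
print("Mean gain:", sum(gains) / len(gains))
print("Chat-hours vs. gain correlation:", round(correlation(hours, gains), 2))
```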

Your findings from these studies will signal whether online communications are helping students learn to speak Spanish, and how to make the program work better in the future.

Notice that what I've said so far about designing evaluation is entirely defined by program goals: the definition of goals sets the assessment agenda and also indicates which activities are most important to study. I've labeled this the Uniform Impact perspective, because it assumes that the program's goals are what matter, and that those goals are the same for all students.

Does the Uniform Impact perspective describe the way assessment and evaluation are done? Do any assessments and evaluations that you know of go beyond the suggestions above? (Please add your observations below by using the “Comments” button.)

PS. “Ten Things” is gaining readers! The alias for the table of contents – http://bit.ly/ten_things_table – has been clicked over 3,200 times already. Thanks! If you agree these are important questions for faculty and administrators to consider, please add your own observations to any of these posts, old or new, and spread the word about this series.

Wednesday, October 21, 2009

K. Evaluation should be mainly formative and should begin immediately.

Earlier, I described some old beliefs about program evaluation. I used to assume that evaluation of TLT had to be summative ("What did this program accomplish? Does that evidence indicate this program should be expanded and replicated? continued? or canceled?"). The most important data would measure program results (outcomes). You've got to wait years to achieve results (e.g., graduating students), and the first set of results may be distorted by what was going on as the program started up. Consequently, I assumed, evaluation should be scheduled as late as possible in the program.

Many people still believe these propositions and others I mentioned earlier this week. I still get requests: "The deadline for this grant program is in two days. Here's our draft proposal. Out of our budget of $500,000, we've saved $5,000 for evaluation. If you're able to help us, please send us a copy of your standard evaluation plan."

Yug!

Stakeholders need to see what's going on so they can make better, less risky decisions about what to do next. Getting that kind of useful information is called “formative evaluation.” (By the way, a stakeholder is someone who affects or who is affected by a program: its faculty, the staff who provide it with services, its students, and its benefactors, for example.)

In the realm of teaching and learning with technology (TLT), formative evaluation is even more important than in other realms of education. The program is likely to be novel and rapidly changing, as technologies and circumstances change. So the stakeholders are on unfamiliar ground. Their years of experience may not provide reliable guides for what to do next. Formative evaluation can reduce their risks, and help them notice and seize emerging opportunities.

Formative evaluation also can attract stakeholders into helping the evaluator gather evidence. In contrast, summative evaluation is often seen as a threat by stakeholders. “The summative evaluation will tell us we're doing well (and we already know that). Or perhaps the evaluator will misunderstand what we're doing, creating the risk that our program will be cut or canceled before we have a chance to show what this idea can really do. And no one reads those summative reports anyway unless they're looking for an axe to use on a program. So, no, I don't want to spend time on this and, if I'm forced to cooperate, I have no reason to be honest.” In contrast, formative evaluations should be empowering - a good evaluation usually gives the various stakeholders information they need in order to get more value from their involvement with the program.

What many folks don't realize is that formative evaluation requires different kinds of data than summative evaluation does.

Summative evaluation usually must focus on data about results -- outcomes. But outcomes data by itself has little formative value. If you doubt that, consider that a faculty member has just discovered that the class average on the midterm exam was 57.432. Very precise outcomes data. But not very helpful guidance for figuring out how to teach better next week.

In contrast, a formative evaluation of an educational use of technology will often seek to discover a) what users are actually doing with the technology, and b) why they acted that way (which may have nothing to do with the technology itself). (For more guidance on designing such evaluations, see "The Flashlight Approach" and other chapters of the Flashlight Evaluation Handbook.)

Corollary #1: The right time to start evaluating is always "now!" Because the focus is likely to be on activities, not technology, the evaluation of the activity can begin before new technologies or techniques go into use. Baseline data can be collected. And, even more importantly, the team can learn about factors that affect the activity (e.g. 'library research') long before new technologies (e.g. new search tools) are acquired. This kind of evaluation can yield insights to ensure that the new resources are used to best effect starting on day 1 of their availability.

Corollary #2: When creating an action plan or grant proposal, get an evaluator onto your planning team quickly. An experienced, skillful evaluator should be able to help you develop a more effective, safer action plan.

Corollary #3: When developing your budget, remember that the money and effort needed for evaluation (what the military might call 'intelligence gathering') may be substantial, especially if your program is breaking new ground.


What are the most helpful, influential evaluations you've seen in the TLT realm? Did they look like this? What kind of information did they gather? Next week, I'll discuss how our current infatuation with learning objectives has overshadowed some very important kinds of evidence, and potentially discouraged us from grabbing some of the most important benefits of technology use in education, benefits that can't be measured by mass progress on outcomes.

Monday, October 19, 2009

11. Evaluating TLT: Suggestions to date, and some old beliefs

For the next couple weeks, I'll be writing about evaluation of eLearning, information literacy programs, high tech classrooms, and other educational uses of technology.

Actually, I've been commenting on evaluation in many of the prior posts, so let's begin with a restatement of suggestions I've made over the last 2 months in this blog series:
  1. Focus on what people are actually doing with help from technology (their activities, especially their repeated activities).
  2. Therefore, when a goal for the technology investment is to attract attention and resources for the academic program, gather data about whether program activities are establishing a sustainable lead over competitors, a lead that attracts attention and resources.
  3. When the goal for technology use is improved learning, focus on whether faculty teaching activities and student learning activities are changing, and whether technology is providing valuable leverage for those changes. (Also assess whether there have been qualitative as well as quantitative improvements in outcomes.)
  4. When the goal is improved access (who can enter and complete your program), measure not only numbers and types of people entering and completing but also study how the ways faculty, staff and students are using technology make the program more (or less) accessible and attractive (literally).
  5. When the goal is cost savings, create models of how people use their time as well as money. And focus on reducing uses of time that are burdensome, while maintaining or improving uses of time that are fulfilling.
  6. When the goal is time-saving, also notice how the saving of time may transform activities, as in the discussion of Reed College in the 1980s, where saving time in rewriting essays led to subtle, cumulative changes in the curriculum and, most likely, in the outcomes of a Reed education.
  7. Gains (and losses) in all the preceding dimensions can be related. So your evaluation plan should usually attend to many, or all, of these dimensions, even if the rationale for the original technology use focused on only one. For example, evaluations of eLearning programs should examine changes in learning, not just access. Evaluations of classroom technology should attend to accessibility, not just learning.
Years ago, I might have looked at a list like this, and also agreed that:
  1. Evaluation should assess outcomes. (how well did we do in the end?)
  2. Evaluation should therefore be done as late as possible in the life of the initiative or project, in order to give those results a chance to become visible (and to resolve startup problems that might have initially obscured what the technology investment could really achieve).
  3. Corollary: When writing a grant proposal, it's helpful to wait until you've virtually completed the project plan and budget before calling in someone like me to write an evaluation plan for you. Just ask the evaluator to contribute the usual boilerplate by tomorrow; after all, evaluation plans are pretty much alike, right?
  4. Corollary #2: If the project succeeds, it will be obvious. If it fails, evaluation can be a threat. So, when developing a budget for your project or program, first allocate every available dollar for the real work. Then budget any dollars that remain for the evaluation.
Do those last four points sound familiar? Have any of those four ideas produced evaluation findings that were worth the time and money? (Tell us about it.) When you plan a project, what purposes do you have for the evaluation? In a couple days, I'll suggest some alternative ideas for evaluation.

PS. This Friday, October 23, at 2 PM ET, please join Steve Gilbert and me online for a live discussion of some of these "Ten Things". Please register in advance by going to this web page, scrolling down to October 23, and following the instructions. And, to help us plan the event, tell us which ideas you'd especially like us to discuss.

Friday, October 16, 2009

Fundamental Question for Massive, Sudden Transition to Online Teaching/Learning


Suppose you are a faculty member teaching an on-campus course that has already begun and suddenly find that you cannot meet with your students for the next 3 weeks. What can you do online that would be better than this "generic assignment"?

Generic Assignment
For the next 3 weeks, read, watch, reflect/do, and write a paper - based on the syllabus you have already received for this course.
  • Read: Specific selections of text - in books, other printed format, or available on the Web;
  • Watch: Videos or other media available on the Web or television;
  • Reflect/Do: Answer these questions or do these problems; 
  • Write: A paper or report on a topic covered by the readings & media. 
What 3 things could most faculty members learn easily and quickly so they and their students can:

1. Use online options that most experience as worthwhile improvements on the generic assignment described above?
2. Worry less about being embarrassed by their first efforts to teach/learn online under these conditions?

NOTE: These "3 things" would be helpful for almost any efforts to enable and encourage more faculty members to try teaching online for the first time. Even just trying one or two online additions to a course they already teach on campus.

Thursday, October 15, 2009

J. Support: Teach Faculty to Solve Problems

“It just struck me the other day...Life is adversity. That is the meaning of life. We crave adversity. We need to get into trouble and stay in trouble...Teachers who retire go back to teaching because they need to be in trouble again."  - Garrison Keillor, “News from Lake Wobegon,” Feb 25, 2008


On Monday, I summarized my former belief that TLT units should teach faculty two things about emerging TLT ideas and material:
  1. Teach them enough about a new technique or technology so that they can decide whether to learn how to use it ("why") and, for those interested,
  2. 'How' to use it.
And my post assumed that it would be specialized, paid staff who would teach both kinds of lessons.

Those two kinds of support, why and how, aren't enough, however. Nor does any university have enough staff to provide the teaching and help needed for continual improvements in teaching and learning (with technology) across the curriculum. Let's start with the missing links in the content of support; then we'll conclude with a fresh look at who should provide that support.

WHY, HOW, AND (?)

To improve educational results, it's usually necessary to help faculty and students make qualitative changes in what they have been doing. Putting 'old wine into new bottles' isn't enough. Unfortunately, when faculty use technology to alter course activities, they can easily be ambushed.

Consider difficulties such as these:
  • A faculty member begins teaching online. Some students begin to fall behind.
  • An instructor adds some challenging new online assignments as homework, with the intent of building on that experience in class; but more students than usual arrive admitting that they haven't done their homework.
  • Students begin discussing issues in a chat room. The conversation splinters. Two students get into a violent argument.
  • In response to an assignment, students create web sites. Some projects are good on the content, but badly organized. Others are well organized and easy to navigate, but the substance is shallow. How should the projects be graded?
  • Each student or small group is working on a different topic of their own choosing. Many choose to work on problems that are each outside the faculty member's comfort zone. The instructor doesn't have time to do the reading needed to become sufficiently expert in all of these areas.
  • The instructor's teaching takes an adventurous turn. However, some students object that 'this is not how this course is supposed to be taught,' and complain to the dean. Student ratings of the course take a dive, and the faculty member's tenure case is coming up soon.
Many faculty resist a new TLT approach because they sense that it could lead to unexpected problems, and most professors and instructors know they're being offered no preparation for coping with those TLT dilemmas.

Here are a couple suggestions to help faculty deal with such problems:
  1. Organize faculty seminars to discuss case studies that each describe one such problem. Cases might be just a paragraph or two, briefly describing a problem, or a bit more elaborate (video clip; artifacts such as transcripts of online discussions). For each case, participating faculty discuss their own experiences: how they interpreted their version of that situation, how they responded, and what happened next. Usually most participants are surprised at how many different ways there are to interpret such a situation, and how many options there are to respond.
  2. After a little practice with such disguised case studies, it's easier to do what Steve Gilbert calls a 'clinic.' One of the participating faculty members describes a problem that he or she has seen personally, perhaps something that's troubling them now. Then the other participants share their own experiences with similar problems, and their suggestions for how to respond now.
Technology's role in academic improvement is analogous to the role of yeast in baking a cake. The staff in TLT support units need to be cake specialists, not just yeast specialists: they need extensive personal experience in using various technologies, old and new, for teaching and learning. But I don't know of any institution that has remotely enough staff to serve their faculty. That's especially true for programs that want to engage most or all of their faculty in improving teaching and learning (with technology).

FACULTY MUST SUPPORT ONE ANOTHER

If a program or institution is to improve teaching and learning on the large scale that technology enthusiasts hope to see, much of the help needs to come from the faculty themselves. That's true even if the TLT improvements are usually low-risk, low-cost increments. The professional TLT staff's role should be to support, organize and sustain those faculty-to-faculty efforts. The only way for such mass engagement to happen is if faculty want it, and if their departments and the institution recognize and reward faculty who help their colleagues. [The focus of this post is how faculty can help one another. But that faculty effort can also be complemented with support from trained student technology assistants.]

Not all faculty need do the same things to help their colleagues. The scholarship of teaching and learning provides one set of possibilities. The teaching case study seminars above are another; the cases should be created and published by faculty (the clinic discussions should help identify candidates) and the seminars should be led by faculty. Similarly, 'scrounging' for TLT ideas and materials needs to be done mainly by faculty. And developing the constellation of support workshops described earlier requires faculty participation as well.

I'm curious. Does your institution's support service for faculty go beyond the 'why' and the 'how?' Does your program encourage faculty to help each other? Can such faculty engagement be scaled up enough so that, over the years, a large proportion of the faculty can comfortably, cumulatively improve their courses?

Monday, October 12, 2009

Improving Teaching and Learning with Technology: Conflicting(?) Schools of Thought

Interesting post by Phil Long (University of Queensland, TLT Group Senior Consultant, and formerly Senior Strategist at MIT) about how to think about improving teaching and learning (with technology).

I think Phil, Steve Gilbert, and I each have slightly different views about how to proactively improve teaching and learning with technology (TLT) in an academic program. Dramatizing our disagreement will, I hope, be an aid to deepening and widening the conversation. Here's my summary of what each of the three of us currently think:
  1. Wait until external conditions are really demanding (a near crisis, perhaps). Then marshall your forces and push for a big change that responds to that crisis. A big change might be, for example, a combination of a curricular redesign, a fresh approach to teaching and learning, and the facilities to support both of them.
    If there is no external pressure, try rallying staff effort and resources around an inspiring vision of the future. Use that enthusiasm to create change that will last. Change will come faster, however, when change agents can take advantage of a crisis. Phil's post, and a comment on it by Trent Batson, suggest an evolutionary metaphor. I think that metaphor is misleading: evolution is 'mindless,' whereas Phil (and I) tend to think in terms of faculty and staff who are intentionally trying to change the larger institution or program of which they are a part. (Phil Long, as translated by SteveE)
  2. In contrast, Steve Gilbert has been working to promote evolution in small steps, an inductive approach to improvement that emerges from relatively independent actions taken by each faculty member. SteveG suggests that staff help each faculty member find or invent small steps that make sense to that individual faculty member. Then help them use feedback to guide what they're doing. Finally, help them each to share their ideas and materials with a few more colleagues who can quickly adapt them with little or no risk or expense.
    SteveG rarely talks about helping faculty to change in any particular direction. I think he's wary of the lure of Big Changes. Remember what Newton said: Every action causes an equal and opposite reaction. Big pushes create big pushback. The small approach is sneakier, producing change that is too invisible, and too grounded in faculty freedom, for anyone to oppose. (Steve Gilbert, as translated by SteveE)
  3. Here's my perspective: identify small steps being made by faculty (here SteveG and I agree). Then try to spot a subset of those changes that could be the beginning of something big and important for the program's students, faculty and other stakeholders. Then start consciously supporting progress in that direction through small steps and, where warranted, big steps. When identifying directions for improvement, pay special attention to outside pressures and rewards: e.g., falling enrollments and the potential to increase enrollment; trends in thinking in the discipline. (SteveE)
Do you buy any of these strategies? Have a fourth to suggest? or perhaps you think the whole idea of a proactive strategy to improve teaching and learning is futile?

10. TLT support: Why and How

You can't understand teaching if you ignore learning. And you can't understand either unless you pay attention to the facilities, resources, and tools used to accomplish them: classrooms and computers, libraries and the web, and other such 'technologies.' At one time staff could ignore classrooms, textbooks, and other traditional technologies because the choices were few, and universally familiar. That's no longer true. Especially in the last decade, the options have multiplied. Because these technology options are not equally good, equally easy, or equally inexpensive, the choice of technologies requires conscious attention, just as teaching and learning themselves do.

That close relationship of teaching, learning, and their technologies is one reason why it's important for institutions to have units that function as TLT Centers, real or virtual.

A virtual TLT Center is a constellation of two or more units such as faculty development, technology support, the library, the facilities program that supports classrooms, distance learning, and departmental TLT experts -- units that work so closely together that they act like a single service provider. For example, their staffs continually learn about each other's resources and from one another's experiences; that way each staff member can draw on all the capabilities of the virtual center.
These things I do believe.

But some of my beliefs have changed. I once believed that, when helping faculty, TLT staff needed to focus on (just) two things:
  1. WHY: Teach enough about a new technology and its teaching/learning uses so that instructors would want to learn more, and, for those who are persuaded,
  2. HOW to teach in those ways.
Is that a good summary of the kinds of help that TLT staff provide faculty at your institution? Or is there something additional that faculty are taught about emerging TLT topics? Please post your observation by clicking 'comments' below.

My second old belief was that support for faculty should be provided directly and entirely by experts in TLT support. At your institution are there people in addition to TLT staff who provide such support?

My third old belief was that this training should be entirely interdisciplinary: faculty are specialized by discipline but TLT staff are not. So this faculty support service should be 'one size fits all departments.' Is that true at your institution?

PS Anyone who knows the work of "the Steves" knows how many of the thoughts in this series come wholly or partly from Steve Gilbert. Our thinking has been so intertwined over so many years that it's not even possible to point out which of the observations in this series originated from him and which from me.

PPS You probably know that this post is part of a series called 'Ten Things I (no longer) Believe about Transforming Teaching and Learning with Technology.' If you like these posts, please spread the word. Perhaps you can use these ideas to help with a more intentional approach to TLT planning.

And join us online for a free, live discussion of these issues on Friday, October 23, at 2 PM ET. It's part of our FridayLive series. If you don't already have a FastPass, click here to register. Thanks!

Wednesday, October 07, 2009

I. Programs make faster, better educational progress when they're world class scroungers

Earlier this week I described my mistaken belief that one should pay most attention to the newest ideas, especially if you can create your own idea or phrase, or at least your own wrinkle, and then claim the credit for being first.

The folly of that belief was pounded home for me in 1996. That was the year that Arthur Chickering suggested that we write an article on how to use technology to implement the 'seven principles of good practice in undergraduate education.' He and Zelda Gamson had summarized these seven lessons from educational research a decade earlier.

I replied that such an article was unnecessary. "Everyone knows how to do this already," I told him. "According to your seven principles, when students cooperate, educational outcomes usually improve. Anyone can see that using email can provide new avenues for students to cooperate. And the kinds of complex, real world projects made possible by computing often compel students to work in teams. Who needs an article to tell them that? It's old news." Chickering persisted. So we wrote the article, got it published in a little newsletter, and soon put it on the web. Very quickly, 100 people per month were visiting our article. Then 200, and 400. A decade later, over 5,000 people per month were taking a look at it. Not bad for an article about ideas so obvious that I'd thought an article totally unnecessary.

Crucial question: How do you spread ideas and skills from the 5% of faculty for whom they're old news to those who would also respond, “That's wonderful!” if they ever heard about the idea or tried the skill? These blog posts are about using technologies in a way that can improve what's learned, who learns, and how they learn. To achieve that kind of change, engaging large numbers of mainstream faculty can be important. Each of them may not need to change what they're doing very much, but they each would probably need to change a little. Suppose it's a change they'd like if they ever heard about it; how can we help them notice the possibility in time?

Steve Gilbert, Flora McMartin and I did a major research study for MIT and Microsoft several years ago. Microsoft had made a multi-year, $25 million grant to MIT, and chunks of that money were being awarded to MIT faculty to do pioneering projects involving educational uses of technology. Our research: discover factors that influenced whether the best of these innovations were ever used by faculty other than their original developers.

One story from this MIT/Microsoft study suggests an important lesson for any program that wants to accelerate the pace of improving teaching and learning with technology.

Pete Donaldson is a Shakespeare scholar at MIT. For years before the Microsoft grant became available to MIT faculty, Pete had been experimenting with ways for his students to use film clips (without violating copyright) in their papers and online discussions. He'd had some success, enough to give workshops on the topic and to be a keynoter at the Shakespeare Association of America, where he gave a spectacular demonstration. His use of video clips, however, relied on an assembly of expensive equipment. Then he received a grant from the MIT/Microsoft iCampus program. The support enabled programmers to work with him, and to figure out a much more inexpensive strategy. The resulting software was called the Cross Media Annotation System (XMAS). Pete used SHAKSPER, a popular listserv in the Shakespeare community, and a mailing list of people who had attended his prior workshops to ask if anyone would like to use this free service in order to incorporate film clips into their Shakespeare courses. Quite a few did, especially because they knew and trusted Pete. One comment we heard from several adapters: Pete wasn't threatening because he wasn't a techie himself. He was like them. So if he could use XMAS, so could they.

The story is not all success. XMAS ought to be a great tool for film courses taught by film scholars, even more than for Shakespeare courses taught by English professors.

But Pete Donaldson is not a member of that community of film scholars, doesn't go to their conferences, doesn't know their listservs, and doesn't write in their journals. Nor do the other English faculty he has helped.

At some point, XMAS and Donaldson's techniques for using it may be adapted by a film scholar who, like Pete, uses the idea for teaching and for research and who, like Pete, has a yen to help his or her colleagues. And then the use of XMAS may begin spreading like a virus in that community.

Let's pull these threads together.

In the real world, instructors rarely have much time to uncover new ideas. Nor can they take many risks (e.g., fear of embarrassment, wasted time when they're already over-committed, risk to a tenure case). That's one reason why new ideas about teaching and learning tend to spread so slowly. However, it can help to hear about such ideas from peers with a reputation for this kind of improvement (especially from peers who teach similar courses to similar students, even at other institutions).

Therefore, I suggest that any institution that wants to make unusual progress in TLT ought to help create and sustain faculty learning communities whose members often (a) teach similar courses, and (b) come from different institutions. If those similar courses have similar students, and the faculty have similar styles, so much the better. That way, if one faculty member has an idea, or uses a technology, or has a puzzling experience, it should be relatively easy for others to emulate. And, by including faculty from other institutions, you and your colleagues will hear about new low threshold steps much more quickly.

You can't search everywhere for everything. That's another reason why it's so important to set one or two focused priorities. Those priorities should help faculty and staff focus their searches for ideas. Become a world class scrounger and borrower of appropriate teaching ideas and materials from around the world! Ironically, that's also a great way for faculty members and their program to get a reputation as world class innovators.

PS. If you don't have much money, search for great ideas in countries where money has been scarce for some time.

Monday, October 05, 2009

Group Nanovation = Open House?

From Steve Gilbert
Extending impact beyond the event.
An "Open House" can extend faculty sharing beyond the location and schedule of the event itself.  What could be done DURING and AFTER the event to enable and encourage MORE faculty members to take advantage of the options offered in that event to improve teaching and learning with technology?  To try some of those improvements more than once?  To collect feedback about their own attempts?  To help some colleagues do the same?
Dave MacInnes (Guilford College) described several key factors during our online discussion of successful Nanovation last Friday (10/2/2009).  For a few other lessons we learned (obstacles, ideas, strategies), digital archive, text chat transcript, etc., from that session, click here.  And watch for future Frugal Innovation sessions.  WHAT OTHER WAYS COULD HELP EXTEND THE REACH OF AN "OPEN HOUSE"?

Dave's recommendations for a Fluidly Structured Event with Carefully Selected Faculty Presenters.
  • Schedule:  1-2 hours?  No absolute starting or ending time for participants - can enter or leave whenever they wish, stay as long or briefly as they wish - low risk of being "trapped" and wasting time!  Offer enough variety to engage most participants for 20-40 minutes if they wish to stay that long.
  • "Presenters":  Feature faculty members who are already recognized for strong personal interests in relevant topics, issues, etc..  Identify and invite 6-8 faculty members to be presenter/mentors DURING the event.  Faculty members who are likely to be respected by colleagues and who are likely to be willing and able to respond to colleagues' subsequent requests for help with similar tasks AFTER the event.  
  • Mixture of major and minor presentations:  Include some presenters/presentations about
    A.  "big" topics - activities or skills that take some substantial effort, time and obviously result in substantial changes in teaching/learning;  and
    B.  "small" topics - LTAs, potential Nanovations - that can be introduced or "gotten" in a few minutes
  • Like Poster Sessions:  Encourage some presenters to prepare as if they were offering a poster session at an academic conference.  Prepare some visual display and/or handouts to enable passers-by to make quick decision, quickly get enough to permit follow-up activities;  prepare to introduce the main ideas, resources in a few minutes (<5).  [This item added after the online session by Steve Gilbert]
  • Advertise: Use email to advertise to whole faculty - emphasize flexible timing
  • Location:  Multiple rooms - further reduces fear of getting stuck in a session;  emphasizes idea of multiple options available to meet varied needs, interests
  • Repetition:  At least once per year.   Build expectations and reputation of providing useful info without wasting time of presenters or participants.

9. We are unique. Avoid 'not invented here.' (NOT)

Monday posts in this series describe things I no longer believe, things that relate to making major improvements in teaching and learning by taking advantage of technology. Here's a big one.

"Our program is unique. And, so far as we know, no one else is yet doing what we propose to do." So far as I know, I am the inventor of that phrase. I coined it in 1977, while writing a grant proposal. Our proposal emphasized our college's uniqueness in higher education, and the uniqueness of our proposed project.

I was especially proud of that phrase, 'so far as we know:' It was a truthful way of halfway admitting that my 'literature search' had not been very thorough. Today, I still remember that I was worried that, if I were to search more energetically, I might discover that someone else had already used the educational idea that we were proposing. And if someone else were already doing it, we'd have to abandon my grant proposal, right? What funder would be interested in supporting the second institution to try a not-absolutely-new idea?

That was part of a cluster of related beliefs that I held:
  1. My institution is unique (or at least highly unusual.)
  2. The newest idea is the most important idea. Even if it's not truly new, pretend it is. In higher education, we get energy from changing what we do from time to time, even if we change from A to B and, after memories have faded and new people have joined the staff, back to A again.
  3. To get a grant, it's important to be first (at least the first of any program like yours, of which there are almost none). Here's more on the goal of being first with a new technology, another belief that I now think is deceptive.
  4. Don't do anything that was not invented here; we're unique so it won't work here (or, by admitting it came from elsewhere, we lose the chance to say we invented this version ourselves).
  5. A technology correlate: when a new technology or teaching technique appears, investigate it by spending time and money to pilot test it locally. What you can learn from a single local pilot test is far more valuable and relevant than what you could learn by spending time and money to discover what 50 people learned by testing it at other institutions.
Do these propositions make sense to you? Why or why not? (Click “COMMENT” below to leave a post.) Later this week I'll discuss what I'd suggest now, instead, about 'not invented here' and its implications for a counter-intuitive approach to course improvement. That post will also build on last week's post recommending the Treblig Cycle.

Saturday, October 03, 2009

H. Faculty support for programmatic improvement: The Treblig Cycle

This "Ten Things" series of posts is discussing some counter-intuitive ideas about how technology can enable major, long term improvement in academic programs: improvements in what is learned, who learns, and how they learn.

Such deep programmatic improvements are more likely to develop when most faculty feel that a change is important enough to warrant patient, persistent effort over a period of years. In these days, when money is tight and competition ferocious for many academic programs, an unusual number of faculty may feel this way.

What kind of faculty support could help such sweeping programmatic improvements develop?

The most problematic requirement for such faculty support is scale: the need to involve most faculty in this academic program. If the program's leaders hope to improve what's learned, who learns, and how they learn, they need to help most faculty develop some new skills, tools, and materials.

My colleague, Steve Gilbert, has been developing the concept of 'frugal innovations,' innovations that can work, and spread, when time and money are scarce. He recommends a cycle of individual improvement and peer-to-peer sharing of those experiences and materials. He has called this process 'nanovation.' Lewis Hyde would call it a 'circle of gifts.' I call it the Treblig Cycle (pronounced treb'lig). If you're curious why I suggest that name, read this article.

As I interpret what Steve has been saying, the Treblig Cycle consists of five steps:
  1. A faculty member learns about (or invents, or reinvents) an improvement for teaching and learning with technology. The materials or tools needed should be freely available or nearly free to this faculty member and his/her colleagues. To make this cycle work, the improvement should also be low risk, obviously rewarding, possibly time-saving, and easy to learn. Steve has called such tools and materials “Low Threshold Applications” and such improvement ideas “Low Threshold Activities.” We usually refer to both as “LTAs”. "Low threshold" is a relative term, not an absolute. Something which is low threshold for some people in their institutional context may be expensive, high risk, or too hard to learn for other faculty in a different institutional context. For the Treblig cycle to work, however, the improvement must be low threshold for most people who learn about it. And, for the Treblig Cycle to help the strategic change of interest, this particular LTA should be an incremental step in that direction. If the faculty are trying to slowly and, eventually, dramatically improve the creative skills of their graduates, for example, then this LTA should help advance that effort just a little bit: a tiny step in the right direction. The fact that many faculty agree that this programmatic change is important - that's one of the things that attracts their attention to this LTA.
  2. The instructor tries the improvement, and finds it rewarding. (If it weren't rewarding for him or her, the process would stop here.)
  3. He or she tries the idea again, gathering feedback to guide the activity and/or to describe its outcomes;
  4. In the process of trying the idea, he or she may also tweak, personalize or otherwise improve it;
  5. He or she helps at least two colleagues inside or outside the institution to begin this same cycle; in other words, these colleagues are now at step 1 of the Treblig Cycle. If each of them in turn gets two or more colleagues to enter the cycle, the low threshold improvement will spread in an accelerating way (the small sketch after this list illustrates the arithmetic).
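The accelerating spread in step 5 is just compounding. Here is a toy sketch in Python, assuming one starting faculty member and a perfect recruitment rate of exactly two new colleagues per completed cycle; real campuses will be messier than this.

```python
# Toy model of the Treblig Cycle's spread: every adopter who completes the
# cycle helps two colleagues begin it. Starting count, recruitment rate, and
# number of cycles are all assumptions made up for illustration.
new_per_cycle = [1]                     # cycle 0: one faculty member starts
for cycle in range(1, 7):
    new_per_cycle.append(new_per_cycle[-1] * 2)   # each group recruits two each

print("New adopters per cycle:", new_per_cycle)    # [1, 2, 4, 8, 16, 32, 64]
print("Cumulative adopters:", sum(new_per_cycle))  # 127 after six cycles
```

Even this crude arithmetic shows why step 5 matters: six rounds of two-for-one sharing reach more colleagues than most support units could ever train directly.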
The Treblig Cycle is more likely to work if the environment rewards (a) faculty sharing information with colleagues inside and outside of the institution, and (b) improvements that aid the chosen programmatic goal.

Obviously, many improvements can't be spread by the Treblig Cycle. Some improvements don't meet a widely felt need, so they won't be rewarding enough to excite their users to share them. Other improvements aren't low threshold for most people.

So, if your academic program is developing a strategic academic/technology plan for the next 5-10 years, or considering which of several strategic options to choose, ask whether each proposed strategic change could be implemented with the help of the Treblig Cycle.

Relying on the Treblig Cycle does not eliminate the need for faculty support units. Quite the contrary. Faculty support units can use the Treblig Cycle as a tool for supporting faculty. For example, the faculty support unit could search for relevant LTAs, could create materials describing the LTAs, and find more ways to encourage faculty to share such ideas. (We'll return to some of these themes in coming weeks.)

To summarize: crucial elements for applying the Treblig Cycle to transformative uses of technology are (a) agreeing on a direction for change that reflects widely felt needs among the faculty, (b) collecting Low Threshold Applications and Activities that many faculty would find rewarding, and (c) encouraging the sharing of such ideas and movement in those directions.

Your comments? Can you imagine an academic program or institution using the Treblig Cycle to support a 5-10 year effort to transform itself, e.g., internationalizing its curriculum? Developing a world class reputation for the design skills of its graduates? Does the Treblig Cycle suggest a reasonable route to a slow revolution?

Thursday, October 01, 2009

Ever Nanovated?

From Steve Gilbert
Have you ever nanovated?    
  • Tried an improvement in teaching/learning with technology - more than once?
        [Alternative:  Tried an improvement once and never again?]  
  • Gotten some feedback about that improvement and changed it?
        [Alternative:  Didn't get any feedback or ignored feedback?]
  • Helped at least two colleagues make similar improvements - in ways that made it likely they would try it more than once, collect feedback, help at least two more colleagues... etc.?
        [Alternative:  Didn't help anyone?  Helped some colleagues but they didn't help others?]
Help!  Survey!  
Please respond to our brief online survey about nanovation. 
Your responses will help us prepare for online discussions of examples and factors that support or hinder nanovations.  

Join the first, probably most exploratory, of these online discussions tomorrow: 
Note:  Do you have other ways of describing or confirming a successful dissemination and use of an improvement in teaching and learning with technology?  What other ways have you used for identifying successful dissemination and use of improvements in teaching and learning with technology?