Worst-Ever Unsubscribe Experience

Every now and then, I let a retail cashier sign me up for the company’s marketing e-mail newsletter, mostly to see what they’re doing with e-mail marketing. OMG! I have to say that Books-a-Million (BAM!) has won an award with me today.

I decided, based on sheer volume of e-mail and the ease of ordering from Amazon, to unsubscribe from BAM’s mailing lists. You know how it’s supposed to go; there are standard practices: Click the unsubscribe link and you get a message, “You have been unsubscribed.” Easy-peasy. Not so with BAM.

Here is the footer of the last e-mail message I ever wanted to receive from BAM:


It says, "Click here, and you'll fill out a new e-mail message."

So, if I click on the link and a new message is supposed to open, that tells me that the code underneath must be:

<a href="mailto:unsubscribe@booksamillion.com">here</a>

…or something like that. No. It’s actually a link to:


When you click it, a new tab opens, and several minutes (minutes!) later you see:


Whose options are THESE???


…and so, I had to select “unsubscribe” from every little drop-down, for unintelligible mailing lists. It was easy, since I’d already decided I never wanted to receive anything from BAM again. And another several minutes later, the page confirmed thusly:


Oh, good. It worked...?

Now, there are too many things wrong with this scenario to list them all, so I shake my head and wonder: “Why would any developer build such a cryptic and horrible interface?” It’s too awful to contemplate.

Books-a-Million, you have a serious problem in your e-mail marketing department. I hope you can track it down.

Postscript: Several HOURS later, I noticed that I had to click the “unsubscribe” button AGAIN to make it final. Sheesh!

The Trouble with Semantic Markup: Response to schema.org

First thing this morning, checking in on the Twitter streams, I saw Jeff Evans (@joffaboy) announce the article, “Google, Bing & Yahoo’s New Schema.org Creates New Standards for Web Content Markup.”

Initial tweet

My heart began pounding as soon as I read the title. The arch-rivals of search, the biggest dogs in the yard, the great institutions of the web were collaborating to propose a solution to the problem of markup that has plagued me from the beginning: Markup doesn’t really address the substance of the web, just its most basic structure. My hopes were further raised by the mention of a “recipe” content type, which, if you follow my writings, you’ll recognize as a regular example.

I retweeted in a flash: This is what I’ve been looking for!

My first retweet

Then, I visited schema.org, and all my hopes came crashing to Earth again. The Search Giant monsters have created a new monster.

My second retweet

Quick Overview

As I understand it, schema.org is proposing additions to HTML that the “Big Three” search engines are going to interpret, in order to improve the accuracy of search results. By augmenting the markup in web content, they are together settling on a standard vocabulary, so that they will all be recognizing the same language. Presumably, once they’ve built this standard language into their sorting algorithms, any content that has these augmentations will rise to the top of search results, above content that doesn’t.

In principle, that sounds good, doesn’t it?

I’d like to offer some reflections on a few practical implications of this effort.

Corporations try to head off the “free” Semantic Web

For-profit companies have watched in dismay for twenty years as the “free” World Wide Web has risen. Content is free. Software is free. Social networking is free. And more and more of the web is being driven by “free” efforts, like the World Wide Web Consortium. Volunteerism is a huge threat to capitalism, and they know it.

Among the greatest of these free efforts is the quest for the Semantic Web, which in its simplest terms, seeks a set of standards for describing the meaning of content. Human language is always problematic—as are those who use it—because words are never just words. The meaning of words is rich, contextual, ambiguous, and worst of all, ever changing. There are a lot of really, really smart people, all over the world, almost exclusively volunteer (with some corporate support), working hard to figure this out. If you want to get a sense of the complexity of it all, talk to Rachel Lovinger (@rlovinger) at Razorfish. She’s one of the true semantic geeks, and I’ll just have to take her word on most of what she says. She’s fab.

But instead of supporting this “free” effort, the Search Giants have imposed a de facto standard for the Semantic Web, and they’re pushing it with the strength of their size and popularity. Like the Zen question of the tree in the forest:

If a search engine doesn’t support your semantic standard, will anyone find your content?

I am suspicious of their motives. I read it as an effort to bypass all the work that’s already gone into the Semantic Web.

Markup is more than basic structure and presentation

It has been a great struggle since the beginning of the web to strike the appropriate balance between the structure of content and its presentation. In other words, what content is should be distinct from how content looks. But HTML—even up to HTML5—still only addresses the most basic aspects of content, and even now, offers only tags that address the pieces of the “webpage”—like the “header” and “navigation.” There isn’t markup to describe the content’s substance.

CSS as semantic markers

Cascading Style Sheets (CSS), in a roundabout way, offer one approach to the problem, although they were originally meant to control the presentation of the content. Let me give an example.

Lists are a primary content structure. We create lists for everything—ingredients, footnotes, archives, contacts, links, Q&A, references, et cetera ad nauseam—but HTML offers us only two choices: “Ordered lists” (numbered) and “Unordered lists” (bulleted).

If your website had a list of links in a sidebar and a list of staff names on a contact page, you would use the same basic markup:

    <ul>
        <li><a href="http://url.for.link/1" title="This is the first list item">Link Text 1</a></li>
        <li><a href="http://url.for.link/2" title="This is the second list item">Link Text 2</a></li>
    </ul>

…and then…

    <ul>
        <li>Contact Name 1</li>
        <li>Contact Name 2</li>
    </ul>

Here’s the problem: The web browser has a default way of rendering these lists, and they will look exactly the same, except that the links will be underlined. If you want to distinguish them from each other, you can add CSS classes, which give you a way to style them differently.

Now, CSS gurus (the best of whom are really content strategists underneath it all) will tell you that you should NEVER use class names that describe how something looks, like class="blue_text". The class names should describe what things are, which is, in fact, a semantic indication:

<ul class="links">


<ul class="contacts">

Using these identifiers, the designer can define precisely how each component of a website should look. In a better world, however, they could also be used to identify what they are. Defining standard CSS classes and identifiers as part of XHTML would be one approach to encoding the meaning into markup.
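As a sketch of what that could look like in practice—the class names come from the example above, but the style rules themselves are only illustrative assumptions, not any standard:

```html
<style>
  /* "links": a compact, bullet-free list suited to a sidebar */
  ul.links { list-style: none; padding-left: 0; }
  ul.links li { margin: 0.25em 0; }

  /* "contacts": a roomier list with a separator after each person */
  ul.contacts { list-style: none; padding-left: 0; }
  ul.contacts li { padding: 0.5em 0; border-bottom: 1px solid #ccc; }
</style>

<ul class="links">
  <li><a href="http://url.for.link/1">Link Text 1</a></li>
  <li><a href="http://url.for.link/2">Link Text 2</a></li>
</ul>

<ul class="contacts">
  <li>Contact Name 1</li>
  <li>Contact Name 2</li>
</ul>
```

The same class names that drive the styling could, in principle, also tell a machine that one list is navigation and the other is a set of people—exactly the kind of meaning the bare markup doesn’t carry.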

But not Google, Bing, and Yahoo—Noooooooo.

The Search Giants, though, instead of building on CSS or any other existing approach, have introduced another “standard,” which superimposes another layer of markup on top of the feeble XHTML we already have. Here is the example from schema.org:

    <div>
        <h1>Avatar</h1>
        <span>Director: James Cameron (born August 16, 1954)</span>
        <span>Science fiction</span>
        <a href="../movies/avatar-theatrical-trailer.html">Trailer</a>
    </div>

Before I go any further, I have to say that this code doesn’t look like any real XHTML I’ve ever seen, and that’s a worry right from the start. Nevertheless…

Once they’ve applied their markup augmentations, again right from schema.org, it becomes:

<div itemscope itemtype="http://schema.org/Movie">
    <h1 itemprop="name">Avatar</h1>
    <div itemprop="director" itemscope itemtype="http://schema.org/Person">
        Director: <span itemprop="name">James Cameron</span> (born <span itemprop="birthDate">August 16, 1954</span>)
    </div>
    <span itemprop="genre">Science fiction</span>
    <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
</div>

There are many, many, many things wrong with this picture.

All the complexity of XML without any of its simplicity

XML is the mother of all markup. In fact, XHTML is just one markup language based on the XML standard. Using XML as the basis of your web code is an elegant—but very complex—solution to defining your content. When it’s all worked out, however, it lets you replace that gobbledygook above with something more like this:

    <movie>
        <name>Avatar</name>
        <director>
            <name>James Cameron</name>
            <birthdate>August 16, 1954</birthdate>
        </director>
        <genre>Science fiction</genre>
        <trailer url="../movies/avatar-theatrical-trailer.html" />
    </movie>

Putting it simply, by augmenting XHTML with another layer of markup, the Search Giants have complicated the code immensely, making it just as complex as if they had done it in XML, but without any of the benefits of XML’s simple elegance.

Content is rarely this simple

The examples above deceive us, in any case: Yes, we can add fields to CMS templates for isolated metadata like “title” and “director,” but what about the main content itself? What about the meaning embedded in the article? Let’s say we’re writing an article about motion picture history, and we include the following sentence:

<p>James Cameron, best known for directing the sci-fi thriller,
“Avatar,” was born on August 16, 1954.</p>

All of the information in the schema.org example is present in that sentence, and if we were searching for content about James Cameron, we would have to rely on full-text searching.

If we were to use the schema.org augmentation, in order to make it all accessible to the search engines, it would get very messy, something like:

    <p itemscope itemtype="http://schema.org/Movie">
        <span itemprop="director" itemscope itemtype="http://schema.org/Person">
            <span itemprop="name">James Cameron</span>
        </span>, best known for directing the
        <span itemprop="genre">sci-fi thriller</span>,
        “<span itemprop="name">Avatar</span>,” was born on
        <span itemprop="director" itemscope itemtype="http://schema.org/Person">
            <span itemprop="birthDate">August 16, 1954</span>
        </span>.
    </p>

Not for mere mortal content authors

Now we come to the main practicality of content: Content authors.

I have marked up a lot of content in my career, and I am an obsessive, precise, exacting author. On the other hand, I’ve implemented CMS templates and tried to configure the best WYSIWYG editors to be able to apply the right CSS classes within content. And I’ve worked with a lot of content owners to teach them the importance of good markup.

Here’s the hard reality: No matter how powerful the technology, no matter how carefully designed and coded the CMS templates, no matter how sophisticated the WYSIWYG editor, and no matter how much training we offer, any markup will ultimately succeed or fail on the content authors’ ability to use it.

And that brings me to my main issue with the Semantic Web.

The Semantic Web cannot rely on encoding alone

If the main difficulty of searching the web is in understanding the meaning of the content (given all the languages, people, markup skill, and so many more factors), then we can really only solve it the hard way: Intelligent reading. We cannot rely on the human beings who create content to make it speak for itself, by making sure that everything is tagged correctly. They just can’t do it.

We cannot rely on markup because XHTML is insufficient, XML is too complicated for more than data structures, and the schema.org effort is unrealistic. In the end, each method may play a limited role in addressing the findability of content, but ultimately, it will require some other kind of intelligence—intelligence in the interpreting of meaning, rather than its encoding.

I don’t know what will happen with the schema.org markup augmentations. Personally, I hope that it just sags under its own weight and disappears into the marshes from whence it came. And I heartily encourage all the folks who are working on this problem to keep at it: There’s no path to success here but the long one. Eventually, perhaps new kinds of computers will be able to understand us weird, wonderful human beings, but for now, we remain inscrutable to the mechanical, algorithmic mind.

Content Strategy: A brief history of the Web

Fish/Bird Tessellation by Escher

I first discovered I was a content strategy guy at the 2009 IA Summit, at which Kristina Halvorson (@halvorson), CEO of BrainTraffic, organized the “Content Strategy Consortium.” It was a natural awakening for me, as I believe it is for others, since I’ve been working with content all my life in one context or another. When content strategists talk, I find myself nodding: I recognize their stories instinctively as part of my own.

The idea of content strategy isn’t as easy for others to grasp, and that’s OK: some of us appreciate naming things and seeing them all in relation to one another. It’s a kind of wayfinding in the growing complexity of the digital age.

This conversation is ongoing, and most recently I found it in the Boxes and Arrows group on LinkedIn: Why IA Should Focus on Content

Here are a few snippets from the conversation:

Regan McClure • I completely agree. I’ve been mulling over the IA/UI/UX/CS job titles for a while (because I do all of them and can’t answer well when people ask for my job description) and I really can’t decide because they are ALL related. Inherently. Completely. Fundamentally. […]

Laura Hampton • Labelling does create an issue, agreed. When everything becomes so integrated it’s important for each area to retain its own merits but it is also essential that the overlap between them all is well communicated.

Kathrin Peek • There is also the discipline of content strategy to consider. This in and of itself is an even younger discipline than IA but ultimately evolved from it and the very need to focus on the content elements themselves. It’s a subset of UX from my perspective[…]

A year ago, Dan Brown (@brownorama), experience designer and founder of Eight Shapes, wrote a Letter to a Content Strategist on his blog, GreenOnions.com, in which he said:

“But, aside from the composition of content, content strategists haven’t (to my satisfaction anyway) defined what it is they design, what’s the output of their work.”

It’s a good article, and I recommend it because he talks about what the other design disciplines need from the Content Strategist. I trust that better apologists than I have explained it to his satisfaction, but I’m going to pick up the gauntlet here anyway, perhaps better a little late than never.

Content Strategy in an Oversimplified Historical Context

I find it helpful to talk about the rise of Content Strategy as simply another thread in the spinning of the World Wide Web. It is, of course, a gross oversimplification, but I have observed that we, as “WWWWorkers,” have always defined new disciplines to help us come to terms with each new BIG THING we’ve learned about our craft. In a continual dance among engineering and design, culture and communication, we have expanded our skills, each according to his/her own gifts. There has always been room for one more…

In the beginning…the Net

In the beginning of the Web, there was the infrastructure: Networks and protocols. It was exciting back in the early 80s that you could type some words on one computer, and they could be transmitted to another computer somewhere on the Internet. E-mail became the new telephone, and we figured out all sorts of ways to use it. We stored files on servers and used elementary browsers to list them. This was the heyday of some really powerful communication forms, like listservs, Gopher sites, USENet, and IRC. With the possible exception of Gopher, these are all still going strong.

And there were links…

Then arrived the hypertext transfer protocol (http://), and we began to structure simple text documents, so that browsers could render them. Hyperlinks began to tie the Web together, and everyone was excited about publishing “personal web pages.” We learned basic HTML (or installed an MS Word add-on, so that it could save HTML), and we created websites of a few pages, all of which were lovingly hand-crafted—and painstakingly and painfully maintained!

Xtreme makeover

But webpages were ugly. They were U-G-L-Y, and there was no consistency across websites because we had no conventions beyond the obligatory “about us” page. In the 90s, we grew impatient with web pages because we couldn’t express ourselves as creatively as we could in other media, like print. So graphic artists began to apply their skills to the visual design of the Web, pushing the boundaries of markup, learning to make things more visually attractive, and establishing some standards for page parts—headers, footers, etc. In their turn, web browsers improved to render what the designers and developers created.

Pretty, but DUMB

As the Web grew, however, we learned that the more information we packed into websites, the more we tried to make them do, and the more we tried to give people a real, touchable experience, the harder and more frustrating it became for the users. We had to face up to the fact that just because websites were beautiful did not mean that people liked using them. They couldn’t find what they wanted, and they were foiled by navigation sequences that didn’t make any sense to them. Fifteen years ago, Vincent Flanders founded Web Pages That Suck, the original rogues’ gallery of bad design. (Unfortunately, he still has plenty of material to publish.)

So were born the twin disciplines of Information Architecture and Interaction Design, which, along with their elder first-cousin Usability, went to work on restructuring and reinventing the web experience. They brought rigor to the structure of sites and consistency to interactive forms, and they recognized the importance of testing sites on real people. Processes, methods, and tools arose to bring consistency to the disciplines themselves. These fields drew on all sorts of existing disciplines—graphic design, library and information science, engineering, and programming—so that they could stand up for themselves and point to what was important.

Dogs wagging tails

But at the turn of the 21st Century, it became unavoidably apparent that although a certain amount of design could be successful based on one’s own insight and the information itself, we still had little understanding of the people who actually were using the websites. So the discipline of User Experience arrived to embrace and extend IA, IxD, and Usability. Now, we understood that we needed to study the people on the other side of the code—their contexts, their goals, their preferences—in order to create usable sites. Personas and other models grew up to inform all the decisions that we were making.

Yes, but “content is king”

Now, in the latter half of the 00s, we have reached the conclusion that we have not paid sufficient attention—have not applied sufficient rigor—to the actual substance of our websites, nor to its appropriateness to our reasons for wanting websites in the first place. And not only on our websites, but also in our applications, in our marketing materials, and in our documentation.

In their broadest sense, the disciplines of content strategy (and there are quite a few) add structure, rigor, and discipline to all the questions about content: What content will help us reach the people we intend? What do they need to know from us? What information will best support our audiences and bring us long-term success?

What do we design?

We design the processes by which organizations decide what content they should publish to meet both their business needs and those of their customers; how they will create that content to ensure that it maintains the right voice, message, and perspective; how that content should be matched to delivery channels and measured for effectiveness; and finally, how that content is managed, refreshed, and retired.

Why do we need a “new discipline?”

There are indeed areas in which all these disciplines overlap, and every content strategist brings a wealth of other disciplines and experiences to bear. Stop content strategists on the street, and they will probably tell you that there is nothing “new” here. Knowing what to call something, however, and being able to draw broad lines around its parts, can help us to focus our attention on it. Erin Kissane (@kissane), in her wonderful and succinct treatise The Elements of Content Strategy, talks about content strategy as a descendant of the fields of publishing, museum curation, marketing, and information science. But these roots only make sense in the wider context of all the other disciplines out of which the Web is spun.

Our challenge and promise to one another

So in answer to Dan B., and as my contribution to the LinkedIn discussion, I say that although there is indeed overlap in our areas of interest, content strategy deals directly with the substance—the content—which the other disciplines help make usable and engaging. We worry about the business strategy the content is meant to fulfill, about how the organization is going to create and manage all that content, and about how the organization will maintain a clear and flexible control over the content’s lifecycle.

We must not allow our desire to draw distinctions for our own understanding to hamper our recognition of each other’s perspectives and contributions. In a few years, there will undoubtedly be new discoveries that lead us WWWWorkers to define new disciplines, but they will take nothing away from all the disciplines and wisdom that we exercise now.

Content Modeling is more than “fields”

When content management folk talk about “content modeling,” they are usually referring to the process of building templates for a CMS.  Besides the Content Management Bible by Bob Boiko, which is a great place to see how a lot of CMSes work, I found a series of excellent overviews of the discipline by Deane Barker of Blend Interactive, Inc., at Gadgetopia.

Barker says:

“Content modeling is the process of converting logical content concepts into content types, attributes, and datatypes.”

In academia, you can find inscrutably technical research on content modeling as the process of identifying the structure of documents algorithmically. (This gem from MIT scintillates! Content Modeling Using Latent Permutations, by Chen, Branavan, Barzilay, and Karger. 2009.)

But if that’s what is meant by “content modeling,” then there are essential aspects missing.

As content strategists, we face this technical view all the time, which I believe is descended from IT disciplines like “data modeling” for database design. We come on the scene talking about content purpose and process, and technologists ask us for template requirements, metadata fields, and data types. In these days of XML standards and the quest for the Holy Semantic Web, we find ourselves pushed into the thick of technical specification before we’ve had a chance to imagine what the content is supposed to be and do, let alone how it should be structured.

Returning to art

In my view, we’d be nearer the truth of “modeling” if we took our cues from other disciplines:

  • When a painter undertakes a monumental work of art, she doesn’t just run in with paintbrushes blazing. She sketches from life. She does études. She makes early decisions about what works and what doesn’t.
  • Murals often begin as drawings in miniature, which are enlarged to scale, then transferred to the wall.
  • The sculptor “models” in clay before casting in bronze.
  • The industrial designer creates digital “models” before production.
  • Developers create prototypes (just “models” by another name) before turning the coders loose.

Models serve as demonstration and instruction to the producers, the assistants, and the artists themselves. They remind and guide. They provide format and boundaries to inspire greater creativity.

Content must be modeled in this creative sense, as well as in the technical sense.

Some suggestions for modeling

  • Banish the “basic page” from your content types. The “webpage” is the content parallel to the “miscellaneous” category in information architecture. Far from being your standard content type, it should be your very last resort.
  • Ask the simple questions. Why are we creating this content form? What are people supposed to do with it? What does that mean for the other kinds of content we produce? How can they be combined into content “super-types?”
  • Do some content studies and sketches. Before you define technical requirements, spend time whipping up some real content to see how it behaves in your domain. If you already have content, gauge the consistency of its form from one piece to the next.
  • Test the usability of your content. Like a user interface, you should see whether people can actually use your content in the way it was intended. Do they get from it what you hoped they would?
  • Define the “rules” for each content type. You’re establishing conventions for the content creators, so they know what they’re doing, and so they can do it consistently over time.

By modeling your content in the artistic sense—by setting the forms and boundaries even before the content is “designed”—all the technical content management exigencies, like “fields” and “data types,” are set in their proper perspective. Templates are simply the mold into which your material is poured and out of which the sculpture emerges, fully formed.

Scrummy Content in an Agile World

Rachel Lovinger of Razorfish began a conversation recently about adopting Agile/Scrum software development practices for content strategy, and as synchronicity would have it, I am, e’en now, helping my team explore Scrum for our migration to Drupal. Rachel’s articulation—echoed by subsequent posters—of the challenges of working with CMS builders in Agile mode to craft content and system simultaneously has raised a few thoughts for me.

Waterfall Content: Must we adopt the monolith?

Traditionally, software systems have been built by the “waterfall” method. An elder child of “scientific management,” the waterfall sees development as a long, linear sequence of requirements gathering, followed by design, followed by building and coding, and wrapped up with testing before release. Without going into a complete review of the difficulties with this method, its underlying principles suppose that it is possible to foresee all the features and details of a system in advance, so that no change to the design should be required (or indeed allowed) after development has begun.

All the laborers in the process, from the “overseer” project manager to the lowliest coder, must build their part of the plan, and then hand it off to the next phase of the waterfall. Any change to the “master plan” is considered a threat to the success of the project, so a serious set of blinders must be kept in place to preserve the sense of moving forward. By the time the system is delivered, even if it conforms perfectly to the original specifications, it may not turn out to serve its users very well. The waterfall makes no allowance for learning, for creative discovery, or for unforeseen circumstances.

Now, since I became acquainted with content strategy (about two years ago now, and still a noob!), I have gotten the impression that large content projects are often done in the same way: Gather requirements, design content models, build templates, create content, and deliver.

And why not? When you’re embroiled in a monolithic CMS project, if any thought has been given to content at all, the system won’t be ready for it until late, late, late in the waterfall. Content types will have been fixed in data capture templates. Layouts will have fused with information architecture. Workflows will have been encoded deep in the CMS DNA. At best, content strategy becomes a parallel cascade that will merge (eventually) with the technical cascade, as the whole project rushes to the sea.

Challenges to Content Agility

Let me draw out some of the themes from the e-mail conversation to which I referred earlier.

I can’t design content in iterative cycles without the “big picture.”

It is a natural panic, when abandoning the waterfall, to conclude that if we can’t know all the requirements and all the specifications up front, then ipso facto, someone must be expecting us to work without any of them. I’ve heard this cry regularly from waterfall practitioners in my own organization: “But you have to have some idea up front of what you’re trying to build!” Yes, absolutely. You have to have “some idea.” You have to have a very clear idea—a rich idea, in fact, a vivid idea—of the final product. We don’t, however, need a perfectly detailed idea of it.

The starting point for Agile/Scrum is an enormous, rich “product backlog,” which includes all the features and attributes that could possibly be included in the final product, captured as “user stories.” These user stories come directly out of strategy: business strategy, market strategy, content strategy, and technology strategy. The whole Team has access to that prioritized backlog from the beginning. Even while the Team is working on specific stories during a sprint, it can infer a general design from the entire backlog—from any important perspective—to guide its work during the sprint.

We use “agile” within the context of a larger “waterfall.”

It is tempting in a disciplined field like technology development to think that methods, like software components, can be disassembled and reconfigured in other combinations, thereby achieving the best of all possible worlds. Of course, “Agile” was first a set of principles set out in a manifesto before it collected any specific techniques. (agilemanifesto.org)

The Agile Manifesto and its extensions stand in direct opposition to the waterfall:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

These principles were articulated in this way because the latter of each pair was considered a principal impediment to the former: Process deadens individual interactions. Documentation delays completed work. Contract negotiation undermines collaboration. Rigid plans shatter in the face of change.

So in my view, the fundamental philosophies of the two approaches are direct opponents. I don’t think you can do both to any real benefit.

Specialists sit idle if they aren’t involved in the stories in this sprint.

Agile refuses to believe that work must be sequential. Each sprint is planned according to the prioritized backlog, the skills needed to fulfill the user stories, and the available resources. A fully utilized Team should be able to select, from the full range of stories, enough work to keep everyone fully engaged during the sprint, and it’s the responsibility of each member to estimate his or her capacity. But at the same time, the Team doesn’t have to limit itself to the current sprint backlog.

In addition, a sprint is simply a “timebox” into which the Team plans its work. Saying that you have “potentially shippable software” by sprint’s end doesn’t mean that you don’t work on anything but the stories to be completed during the sprint. In fact, the Team must always be looking ahead to future sprints, so that if there is “prework” to do before future stories are selected, the Team needs to plan when and how that work will be done within the sprints.

Because some work is dependent on other work, someone will get left at the end with too many tasks and not enough time.

Part of the Scrum philosophy is that the whole Team is responsible for the success of the whole. It’s a team issue, rather than a methodology issue, if the work is not evenly distributed, or if the Team is in danger of missing its commitments. It is also important to expect that, over time, the Team will get a better sense of its own process and relationships. A sprint or two of someone getting left holding the bag should be enough to raise a flag to the Scrummaster: Something’s not working—and that’s not the fault of Agile/Scrum, but of the Team’s ability to select and complete an appropriate number of user stories for a given sprint.

Agile is too loose to “manage” large content projects successfully.

I have also heard it said that Agile development would work if human beings and their projects were more predictable. I have also heard that the larger the project, the more important hierarchy and sequence become. These are exactly the myths of “scientific management” and its child, the “waterfall”:

  • A large system can be controlled with the proper structure and oversight.
  • If they work hard enough, people can be predictable.
  • Error and inefficiency are the results of carelessness and poor planning.

It is counterintuitive, but true nonetheless, that the waterfall approach cannot work as long as any human beings are involved. We are inherently unpredictable. We learn. We communicate ineffectively and incompletely. We make mistakes. We fail. We discover. We invent. We play.

Agile and Scrum accept the ultimate truth that no system can be controlled or predicted as long as human beings are involved. Instead, by breaking the work into short bursts, emphasizing conversation and regular inspection over the course of iterative and incremental work, and accepting change as inevitable, it becomes possible to:

  • Minimize the long-term impact of inevitable mistakes.
  • Take full advantage of humans’ creativity and inventiveness.
  • Move forward without being paralyzed by uncertainty.

Your Platform Needs to be Agile, Too

As a final comment, as I posted to the e-mail conversation, whether you have the freedom to be Agile depends a lot on the CMS you’re using. A monolithic platform that was designed to be implemented according to the waterfall—get all the requirements together, build it, launch it, and don’t make any changes—can absolutely kill any flexibility that an Agile approach might have afforded. It’s well and good to say that your CMS developers are using Agile techniques, but if it requires tons of rework to change the content templates because you learned something later on that shifted your modeling, then you’re really still working under the waterfall.

On the other hand, a system like Drupal can handle lots of global change as the sprints progress, so it actually supports the Agile process better. For more on how that works, look for presentations by Rob Purdie at The Economist.