I’ve heard of several teams programming in mobs using the concept of the mobodoro. I believe in the notion of parallel development, so I’m not going to claim I am the sole inventor or coiner of the term, though I believe I introduced it at my current department. Instead, I want to share my personal discovery of the mobodoro and how it developed in the mobs I’ve worked with.
I’m going to break down my experience and practice in order to make it fast to get started, and easy to return and read more later. This post will present everything in the following order:
- How: how to get started
- Why: why I find it useful
- History: how the concept came about and developed over time
I’m presenting my experience this way so anyone reading this can dive right in and try it, even without understanding all of the whys. The whys come next, presenting heuristics for telling whether the mobodoro could benefit your team. Finally, the history may provide insight into both my personal journey and the kinds of twists and turns you might experience while discovering something new.
How to Implement the Mobodoro Method
To start your team working in mobodoros, the foundational pieces are the rotation, the retro, and the break. If any of these core pieces is missing, it is unlikely the mobodoro method will stick with the team or provide maximum benefit.
Simplest Mobodoro Format
The most basic form looks like this:
- Mobodoro duration – 25 minutes
- Rotation of driver/navigator
- driver and navigator roles are the same as standard mob programming
- the length of time for any one person is (25 minutes)/(# of mobbers)
- length example: 5 mobbers, 25 minutes, rotations occur every 5 minutes
- people seem to settle into about 3 minute rotations as a “sweet spot”
- a short (3 minutes or less) retrospective of the last round
- may be very informal, more of a reflection
- a disciplined break after each 25 minute period
- allows people to take a bio break, rest their eyes, stretch, etc.
- long enough for a reasonable bio break (5-7 minutes seems to work well)
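The rotation arithmetic above is simple enough to sketch in a few lines of Python. This is only an illustration; the mobber names and function names are my own invention, not part of any mob timer tool:

```python
MOBODORO_MINUTES = 25

def rotation_length(mobber_count: int) -> float:
    """Each mobber's turn is the mobodoro length split evenly."""
    return MOBODORO_MINUTES / mobber_count

def schedule(mobbers: list[str]) -> list[tuple[str, float]]:
    """Pair each mobber with their turn length for one mobodoro."""
    turn = rotation_length(len(mobbers))
    return [(name, turn) for name in mobbers]

# With 5 mobbers, each turn comes out to 5 minutes, matching the example above.
print(schedule(["Ana", "Ben", "Cam", "Dee", "Eli"]))
```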
Although this looks like a lot of bullet points, it’s really just an interleaving of the ideas of mob programming with a pomodoro. The key to this simple form is to develop disciplined practice around rotating, retrospecting, and breaking. With this cadence it becomes reasonable to work in a mob with others all day without feeling too exhausted.
One extra note: by retrospecting every 25 minutes, stakes tend to lower and conversations become more natural over time. There is little time for issues to linger and become bigger sticking points, meaning the developers working together can find solutions to small challenges more easily.
Experimental Mobodoro Formats
Once the discipline around rotations, retros, and breaks has been developed, and people feel confident with the tools, it’s reasonable to start experimenting with the format. The critical piece, which must stay in place, is the rotation -> retro -> break sequence, in order to provide a familiar structure which reinforces what people have learned.
Experiments are best done one at a time, in order to identify whether they are effecting the change you are looking for. The most common experiments I’ve seen or been a part of are rotation length, cycle length (the time to complete all rotations in a single mobbing cycle), rotation count, and integration of other periodic tasks, like email, into the cycle.
So far my personal favorite experimental format is the following:
- 15 minute cycle time
- 1 minute rotations (3-5 mobbers)
- 1 minute retro each cycle
- break every other cycle
- check email after break
This means, with 5 mobbers, each mobber will be at the keyboard 3 times per cycle, a retro will occur every 15 minutes, and a break is taken every 30 minutes.
It is worth noting that this format was discovered through running experiments, rather than simply jumping right into it. Often, people who are used to 5-10 minute rotations find 1 minute rotations far too fast. On the other hand, someone who finds 3 minute rotations too long might benefit from shortening the rotation time.
Automating The Mobodoro
Automating the mobodoro format is done to offload things the team must remember. If you are already using a mob timer to keep rotations moving along, you are most of the way there. Some timers are a little more primitive, and may require some creative setup to automate your preferred format. Others, like Mobster, have features which allow you to set length of rotation, break interval, and the like.
The automation I have seen be most successful tends to involve integrating non-human elements into the timer so it periodically reminds the team to do something. This kind of reminder is great for things like email, committing code, running tests, etc. I find that other things, like “check your calendar before starting, at lunch, and at EOD,” tend not to integrate well.
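To make the idea concrete, here is a rough sketch of what an automated cycle with periodic reminders might look like. This is not how Mobster or any real mob timer is implemented; the rotation counts and reminder texts are assumptions for illustration:

```python
import time

ROTATION_SECONDS = 60        # 1-minute rotations
ROTATIONS_PER_CYCLE = 15     # 15-minute cycle
REMINDERS = {5: "Commit what you have", 10: "Run the full test suite"}

def run_cycle(mobbers, sleep=time.sleep):
    """Announce each driver in turn, surfacing periodic reminders.

    The sleep function is injectable so the cycle can be tested
    without waiting out real rotations.
    """
    announcements = []
    for turn in range(1, ROTATIONS_PER_CYCLE + 1):
        driver = mobbers[(turn - 1) % len(mobbers)]
        announcements.append(f"Rotation {turn}: {driver} drives")
        if turn in REMINDERS:
            announcements.append(f"Reminder: {REMINDERS[turn]}")
        sleep(ROTATION_SECONDS)
    announcements.append("Retro, then break; check email after the break")
    return announcements
```

The point is only that the timer, not a person, carries the burden of remembering the periodic tasks.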
Why Use the Mobodoro Method
I have found the mobodoro method is best suited to the situation where you want one or more of the following to improve:
- Identify improvements quickly and iteratively
- Improve team cohesion through regular communication
- Deliver higher value software through constant collaboration
- Ingrain disciplined practice into the team
- Build the retrospective muscle
- Develop emotional endurance
Let’s take a look at each of these heuristics and what they mean:
Identify Improvements Quickly and Iteratively
It’s common for teams to retrospect once a week, or even once a sprint. Though these retrospectives may provide value, the turnaround on experiments tends to be slow, and the indicators tend to trail. This makes it difficult to make fast changes and observe results at the speed of work. By introducing retrospectives into the course of daily work, the immediate team can find, troubleshoot and eliminate challenges which might have been plaguing them.
Improve Team Cohesion Through Regular Communication
Short rotation cycles put developers in a position where they must learn to communicate ideas clearly, and succinctly. This focus on clear communication, coupled with high-frequency retrospectives provides the foundation for team members to learn how to communicate with each other well. The repeated process of retrospecting also encourages regular, thoughtful communication. That communication improvement is the bedrock for effective collaboration.
Deliver Higher Value Software Through Constant Collaboration
Teams deliver high value software when they collaborate. The closer they get to constant collaboration, the more the value of the software increases, to the point where the primary hindrance is whether the next most important work is actually the most valuable to the stakeholders.
The mobodoro method provides a framework from which we can hang our collaboration. With a focus on rapid rotations, frequent retrospectives and constant experimentation, it becomes easier to collaborate effectively, constantly.
Ingrain Disciplined Practice Into the Team
Disciplined practice is anything which is done intentionally on a regular interval. The mobodoro method focuses heavily on disciplined retrospectives, and disciplined breaks. Disciplined retrospectives will help the team be more effective in the retrospectives which are part of the larger conversation. Disciplined breaks make it easier for the team to build up endurance for the emotional work which needs to be done in constantly collaborating with each other.
Build the Retrospective Muscle
Since retrospectives are, commonly, an infrequent event – separated by weeks – it takes a long time to build healthy skill in being part of an effective retrospective. The more frequently people are exposed to retrospectives, the faster they will build the “muscle” to be a part of, and even facilitate, an effective retrospective. This means, by having lots of small retrospectives, the big retrospectives will actually improve!
Develop Emotional Endurance
Constant collaboration requires empathy and thoughtful interaction. Working this way can be exhausting, especially for people who need time alone to recharge. Introducing disciplined breaks provides people with a safe space to recharge momentarily. This recharge helps people develop more emotional endurance, leading to healthier collaboration.
This is not a comprehensive list of heuristics, but these are the things I have found to improve most when I work in a mobodoro style with teammates. It is also worth noting that, if none of these sound like issues you need to resolve in your mob, perhaps the mobodoro method isn’t right for your team. Likewise, if you find that working in a mobodoro style isn’t conferring any benefit, you may want to replace it with a different pattern.
All of this said, if working in a mobodoro style makes something painful, you have discovered a problem you and your teammates may not have been aware of. Try leaning into the pain and look for the cause!
The Earliest Experiment
The earliest experiment I ran on mixing mob programming and the pomodoro technique was centered around the fact that people were feeling very frustrated by trying to communicate their ideas to each other. In order to help ease the pain, we tried running 25 minutes worth of keyboard time for each mobber, so they could explore, explain, and share.
This short-lived experiment led to poor communication, disengagement, and a cowboy attitude toward developing software. Ultimately, the experiment was abandoned, but not forgotten.
The Before Times
About a year after the earliest experiment, I found myself in a different team with different problems. We were struggling to communicate, much like before, but instead of just having communication breakdowns, we actually were fighting over control of the direction. We were communicating only the smallest of behaviors with each other, and the overall direction was not obvious. Moreover, the disengagement during each rotation was very high.
Willem (Larsen) was one of the people on this team, and he suggested shortening our rotation time from 7 to 5 minutes. Just removing the two minutes from our rotation time reduced the disengagement, but increased the pain around communication. Jason (Kerney) and I found it difficult to convey our ideas in 5 minutes, and were often frustrated by the reduced time for navigation.
At one point, we decided to start doing regular, short retrospectives. The more we talked, the easier it was to collaborate. At this point, we started using the mob timer to automate remembering to retrospect regularly.
This is when the pieces fell in place for me. I suggested that we subdivide 25 minutes between the three of us, and aim for a pomodoro style work/break behavior.
The Early Age of the Mobodoro
Once we put the mobodoro structure into place, we started experimenting with identifying work we needed to do aside from writing code. We started integrating time for breaks, retrospectives, email checking, schedule checking, code commits, and more.
During this time, we experimented with the amount of time we spent on each rotation, the length a break needs to be to allow for bathroom use, getting coffee, or anything else someone might need to do before jumping back into work.
The Middle Era
Our rotation time began to settle at 3 minutes. We also started introducing retrospective questions into the mix, which we found most useful for surfacing information, and ideas. As we settled into a formula which worked for us, we started focusing on other, more pressing problems.
I believe this was also when we started to feel like we were stagnating. Changes weren’t as drastic, and they came less frequently. We found ourselves focusing more on the code and technical issues. This shift away from process improvement made the retrospectives feel lackluster and uninspired.
We had become highly effective, but we needed new inspiration.
The Current Era
Willem and I had started developing a hypothesis on what the minimum rotation time might be. My theory then, and now, is that time at the keyboard must be longer than the time spent rotating the new mobber into position. We decided that the next interesting rotation time experiment could be a 1 minute rotation.
Some time later, I had an opportunity to try out the 1 minute rotation – sadly, without Willem or Jason. Something interesting happened: we stopped writing code and started talking about it, instead. We discussed what we wanted to do, why it was valuable, and what we might be overlooking.
This new experiment was run during a training exercise where multiple mobs were all doing the same thing. When we broke from the exercise and returned to reflect with the rest of the group, the mob I was in actually produced the same amount of work as the other high-performing group. The difference was, our group spent most of our time discussing what we wanted to accomplish and the direction to take next. The other group hunkered down and just hammered the code.
This was telling since we spent far less time actually writing code, and more time discussing, but we were actually able to produce the same result, at the same quality, as the other group who didn’t prioritize communication.
I realized right then that no matter how many – or few – people are in a group, writing code as fast and furiously as possible, it’s not more valuable than the conversation which happens between the people who are working together. More than that, when the communication between members increases and improves, everyone actually contributes at the same level!
I’m still experimenting with ways to improve mobbing with others, and challenging what I believe is true about the mobodoro method. All of the observations in this post are retrospective in nature, and may only be part of a larger story. Nevertheless, the most important aspects I find to hold true are the value of communication, collaboration, and frequent experimentation.
I started writing a long, wordy post all about ECO mapping, how it works, why you should use it, and the entire process of generating the map from the top down. It was WAY too much.
Instead, I want to introduce you to ECO mapping from the bottom. I’ll follow Kurt Vonnegut’s advice and start as close to the end of the story as I can.
ECO mapping is a way of breaking down work you are planning to do. The goal of an ECO map is to understand exactly ONE THING the user is going to do, identify the things the software must do to respond to the user action, and understand the outcomes which can arise.
What is ECO Mapping?
ECO mapping (pronounced ee-koh mapping) is the process of prototyping a thin slice of a software solution, without code, to identify areas of concern and allow for early discovery of work which needs more understanding before implementation can begin.
What ECO Mapping Is Not
ECO mapping is not the entire process of discovery. There is no replacement for talking with customers and stakeholders. Tools like Event Storming, user studies, personas, and more, can facilitate better conversation, and surface crucial pieces of the problem which need attention.
The E, C, and O of Mapping
The acronym ECO stands for event, commands, and outcomes. These three ideas are the core for understanding and discussing the next thin slice of software which must be built.
In an ECO map, events are anything which triggers a behavior inside the system from the outside. Any time a user interacts with a text field, or clicks a button, it is an event. In much the same way, an action from an outside system which interacts with the system you are building is also an event.
Examples of events:
- A button click
- An HTTP request
- A message from a message queue
The one thing which hasn’t been highlighted yet is intent. No event is terribly interesting without some intent. No event is triggered without reason, even if the reason is simply because the button says “click me.”
Each event should be described with regard to the intent and context:
- The user clicks the button to generate a report
- The user requests a book list (HTTP get request)
- A data read result was returned through the message queue
By associating the triggering event with intent we have insight into what the expected outcomes will be, which will help us build software which is better suited to meet the user needs.
Commands are the “thing” your software will do in response to an event. This language comes straight out of Event Storming and holds much of the same meaning.
There are a couple of important rules around commands: they must represent some discrete behavior the program will do, and any command can fail. We will explore this more in the outcomes section.
The goal of identifying commands is to start identifying and visualizing the parts of the system which will be exercised when an external event occurs.
When beginning to build your ECO map, it is fine to simply identify a command at a high level. High-level commands can be things like “gets data from the database and returns it.” Often, this high-level command will help trigger thoughts around the smaller, distinct parts of the system which will run due to a triggering event.
Outcomes are what happens when a command is run. These outcomes may be success messaging, data retrieval, or, possibly, errors. Since any command can fail, errors are critical to understanding what will happen within the system.
With each triggering event, a user is expecting some desired outcome. Two things may happen. Either the user gets what the system can provide, or they are given information on why the system can’t complete the request.
I make a distinction between what the user desires and what the system can provide only because some systems don’t have the means to provide what the user actually wants. These constraints can be design, or technical in nature, but it is important to understand the kinds of constraints you will encounter as you build a new execution path.
Often, as we start identifying outcomes, it becomes clear that new commands need to be drawn out. This is okay. The goal of an ECO map is to generate discovery and discussion. You’ll know you’re done generating outcomes when new ideas stop bubbling up easily.
An important note: DO NOT force yourself to imagine every possible outcome. That will likely lead to analysis paralysis, and you will never be able to guess every possible scenario. If you capture the errors which readily jump to mind, you will have a robust enough list of outcomes to begin driving development.
Building an ECO Map
It is worth noting at the outset that if you have not explored the domain for your problem before jumping into ECO mapping, you may find this process difficult to complete. Consider starting by having a discussion with your product owner, stakeholders, customers, etc., to uncover the problem which needs to be solved. Often a tool like Event Storming can help. It is also worthwhile to have a storyboard of user interactions. Understanding the kinds of actions a user can take will make it easier to drive your ECO map forward.
For the sake of this discussion, let’s use the classic to do list to explore ECO mapping. Our to do list will communicate over HTTP and write data to the database, which will add just enough complexity to understand how to slice your work.
Picking an Event
Because ECO mapping assumes any one event and interaction is largely disconnected from the others, we can choose any triggering event to begin.
Let’s pick the “add to do” button click user event to trigger action in the system. Your map would look like this:
Here are my assumptions by picking this triggering event:
- The to do item form exists, with an “add” button
- Styling and other visual concerns are either complete, or underway
- We have a place to put the new item on the page
In the grand scheme of things, this is a really small set of assumptions.
Let’s see what the commands would look like considering the selected event:
- Validate user input
- Send new to do item to server/API
- Display result from server
- Server Side
- Validate user input
- Write to do item to database
- Send result to requestor
We can see there are roughly six high-level commands which must be written to accomplish the task at hand. These commands represent distinct behaviors which the software must do.
Now that we have an event, and commands, let’s have a look at outcomes. Each command will have at least one outcome. Any command can fail. The failure condition is also an outcome. Let’s take a look:
- Validate user input
- Input okay
- Input not okay
- Send new to do item to server/API
- Request sent
- Request failed
- Display result from server
- Display new to do item
- Display error result
- Server Side
- Validate user input
- Input okay
- Input not okay
- Write to do item to database
- Write is successful
- Write failed
- Send result to requestor
- Send success status
- Send error status
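The nested list above can also be captured as plain data, which makes the “any command can fail” rule easy to check. This is only a sketch; the dictionary shape and wording are my own, not part of the ECO mapping method:

```python
eco_map = {
    "event": "User clicks 'add to do' to save a new item",
    "commands": {
        "client": {
            "validate user input": ["input okay", "input not okay"],
            "send new item to server/API": ["request sent", "request failed"],
            "display result from server": ["display new item", "display error result"],
        },
        "server": {
            "validate user input": ["input okay", "input not okay"],
            "write item to database": ["write successful", "write failed"],
            "send result to requestor": ["send success status", "send error status"],
        },
    },
}

# Every command has at least two outcomes because any command can fail.
outcome_count = sum(
    len(outcomes)
    for side in eco_map["commands"].values()
    for outcomes in side.values()
)
print(outcome_count)
```

Six commands, each with a success and a failure outcome, yields twelve outcomes before the validation-error follow-ups are added.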
Outcomes Driving Commands
The outcome from one command may be the triggering event for another command. In our to do case, if user input fails to validate, we need to handle the fallout somehow.
We wouldn’t have uncovered this without our initial ECO map. Below is an updated ECO map with our validation concerns highlighted.
The updates include the following:
- Client side
- Display validation error
- Server side
- Respond with validation error
Test Driven Development
ECO mapping provides a framework for identifying events, commands and outcomes. These concepts flow directly through the idea captured in Given/When/Then and Arrange/Act/Assert. By following the chain of events, commands and outcomes, we have a perfect vision into the tests we must write.
In our “to do” application thin slice, I can see a total of about 14 tests which must be written to cover the cases I can anticipate. This means I can read the test cases directly from the map and use them to drive the final implementation, and design.
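As a sketch, a pair of those test cases can be read straight off the map for the client-side “validate user input” command. The `validate_new_todo` helper and its rule are hypothetical stand-ins for whatever your slice requires:

```python
def validate_new_todo(text: str) -> bool:
    """A new to-do item must contain something besides whitespace."""
    return bool(text.strip())

def test_input_okay():
    # Given a filled-in form, when the user clicks add,
    # then validation reports the input is okay.
    assert validate_new_todo("buy milk") is True

def test_input_not_okay():
    # Given a blank form, when the user clicks add,
    # then validation reports the input is not okay.
    assert validate_new_todo("   ") is False

test_input_okay()
test_input_not_okay()
```

Each outcome on the map becomes one test, so the map doubles as a test plan.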
ECO mapping also highlights the power of collaboration on a team. The more team members collaborate as the ECO map is developed, the more likely the team is to uncover better ways to build software together. By collaborating on the map, we end up collaborating on the tests, which leads to collaborative coding.
Indu Alagarsamy mentioned that sticky notes are cheaper than code. ECO maps capitalize on this notion by providing a fast (5-10 minutes), cheap way of doing software thin-slicing and development discovery.
By getting together with the team and developing an ECO map together, you get quick feedback, and you also generate healthy conversation about work which must be done, and the discovery which underpins potential unknowns. Test cases emerge directly from the ECO mapping process, making it easier to test-drive the solution. This means higher quality software becomes more attainable.
Michael Feathers defines legacy code as code without tests. This means code written years ago, with a good test harness, is not legacy code. It also means the code written yesterday, without tests, IS legacy code.
We don’t need to dig very deep into this to understand what is happening here. Code which has tests is going to be easier on the nerves to change than code without. If we dig a little deeper, code with descriptive tests actually documents context and meaning for the code under test.
It’s very common, even as TDD continues to gain popularity, to encounter legacy code. A common response is to want to remove legacy code and replace it with something new. Generally speaking, it is unwise to do this.
There are two scenarios which arise around legacy code: adding new features, and updating old code. Trying to fit both of these topics into a single discussion is too much for my simple mind to attempt, so let’s talk adding new features!
When you add features to a legacy codebase, there are three things you will want to keep in mind. I even have a fun little mnemonic for you: TIP.
- Test expectations first
- Integrate late
- Pure behaviors by default
We will examine these three ideas and how they make adding new features a more reasonable request. Mind you, legacy code is a tough problem, so this is a guide, not an absolute. You will always need to use your best judgement to assess your particular situation.
So, let’s have a look at the TIP approach.
Test Expectations First
New features may be big or small, but either way, it is important that you get a good feel for the expectations stakeholders have around the feature you’ll be developing. The most effective approach to gathering this information is to have conversations. Lots of them. At the very least you should probably talk to people about the problem you are solving as much of the time as you are writing code, but that’s a different discussion.
When you and your team start approaching a story, the user story is the beginning of the conversation, not the end. Be ready to take lots of notes. Draw pictures. Identify the kinds of behaviors which are expected in the system. For an especially robust conversation, try using event storming to gather insights.
Once you have all of your expectations captured, you are ready to start iterating on your solution. It’s important to understand that your solution is almost guaranteed to require iterations. It is entirely likely that you did not capture all of the information available in the first conversation.
Before you write a single line of code, write a test. Capture some behavioral expectation in that test and decide how you want to interact with the code you’re getting ready to write.
This test should reflect an initial state of the system, the event that triggers your new behavior, and the outcome of that behavior. There are a few different ways to capture this, including the classics: Arrange/Act/Assert and Given/When/Then. Regardless of the test format you choose, be sure you test discrete expectations and cover the cases you are aware of. Use each new test as your North Star, guiding your development efforts.
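Here is what capturing an expectation first might look like, in Arrange/Act/Assert form. The `apply_discount` function and the 10% loyalty rule are hypothetical stand-ins for whatever your stakeholders described; in real test-first work the implementation below would only be written after the test existed:

```python
def apply_discount(order_total: float, is_loyal: bool) -> float:
    """Minimal implementation, written to satisfy the test below."""
    return order_total * 0.9 if is_loyal else order_total

def test_loyal_customers_get_ten_percent_off():
    # Arrange: the initial state of the system
    order_total = 100.0

    # Act: the event that triggers the new behavior
    final_total = apply_discount(order_total, is_loyal=True)

    # Assert: the outcome the stakeholders described
    assert final_total == 90.0

test_loyal_customers_get_ten_percent_off()
```

The test name itself records the expectation, so even a failing run documents what the stakeholders asked for.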
You’ll note we spent a lot of time talking about communication in this section. The reason for this is, the only way to uncover expectations is to communicate with the people who hold information about the desired outcome. Often they will forget to share something you would consider critical. As a developer, it is crucial you develop the skill of surfacing those important details, as they will be the signposts to building a well-aligned solution.
Integrate Late
I received some questions and I wanted to provide direct insight: this integration is NOT with regard to the practice of continuous integration (CI). Keeping code outside of your CI pipeline can lead to tremendous challenges and pain.
Instead, the idea can be viewed as code which exists alongside the rest of the working software source, under test. The integration is simply the introduction into the user-accessible flow of the application. Consider late integration in this case as an airgapped feature.
New features, regardless of where you are in the product lifecycle, go through a process of discovery, development, and iteration. All of this is best done outside the flow of the current system. Ideally, the current software is in production and providing value to users. We want to cause as little disruption as possible to the current software as we introduce new behaviors.
When working in a legacy system, the idea of working outside of the primary released software is even more important since there is a lot of risk associated with modifying existing code. Often, even small changes in a legacy system have wide-reaching consequences, so care is critical.
It’s common practice to introduce feature toggling into systems in order to cordon off new development work from the eyes of the user. This protects the user from accidentally stumbling into a feature which is incomplete and, possibly, unstable.
In a legacy system the feature toggle is not a conditional behavior. Instead, we can view integration into the system as our feature toggle. By developing code which is not reachable, by any means, from the main application, we protect both our new development efforts and the user, who might otherwise interact with something that could lead to an unrecoverable situation.
Integrating late, then, means waiting until you feel confident that the work you have done is at a point where, at the very least, stakeholders could interact with it and provide feedback. This airgap provides safety around the changes you make and enables the company to continue providing value in the software without breaking customer expectations.
Pure Behaviors by Default
We can look to functional programming and get a sense, immediately, of what a pure behavior might be. For our purposes, we can consider a pure behavior to be a behavior which performs a data-in, data-out action without interacting with external systems or maintaining state.
Business logic can be largely characterized by our definition. Business rules can be stated as “if x, then do y.” This means we can describe most of the business concerns through pure behaviors, and test them accordingly.
If we write the majority of our new feature as a collection of pure behaviors, we will be able to test most of it without even concerning ourselves with the inner workings of the rest of the system.
It is worth noting, by creating new, pure behaviors, we may end up duplicating code which exists elsewhere in the system. This is fine, since we can always refactor later. It’s important in the refactoring that we be mindful of keeping pure behaviors by default, since this is our path out again.
Since pure behaviors are far easier to test than behaviors embedded deep inside a legacy codebase, this approach creates a positive feedback loop: others now have an example of a testing methodology which is easy to follow and succeed with.
Folding it All Together
Although this approach is not the grand unifying solution for all legacy code woes, it provides a means to start delivering new value in a system which might otherwise be difficult, or even impossible, to work with.
If we look at the entire TIP methodology, we can see it bundles the classic TDD approach of test-first, a healthy practice of reducing coupling between program elements, and the descriptive quality of well-scoped pure functions. By working within the TIP structure, each part of the new feature development process builds upon the new, healthy codebase we created, meaning this is a self-reinforcing loop we can rely on.
Of course this method of approaching a legacy codebase continues to rely on good XP practice including sharing knowledge, refactoring, tests, automation, etc. Instead of viewing the TIP approach as a standalone practice, consider it a part of the process of integrating new, healthier practices into a codebase which makes change hard.
There are many people who would likely say I don’t handle some social situations with the utmost grace, and they wouldn’t be wrong. I’m human, and my emotions can get the better of me more often than I would like. Nevertheless, there are interpersonal values I hold dear as I work with other people day-to-day.
During the time I worked with Jason Kerney and Willem Larsen, we sunk a fair amount of effort into discovering better ways to work together. Over time we ran many experiments and had varied results.
Side note – if you never have failed experiments, were they experiments? Perhaps this is another post.
Anyway, we had experiments that succeeded and experiments that failed. For software developers, successes can feel hollow when they are not part of a visible feature, and failures can feel like a blow to the ego.
What kept our team afloat through the roller coaster ride were the values we held. Since we held our values more closely than any one success or failure, the successes felt less hollow and the failures felt more educational. Ultimately, we made the entire process about people, and this is what shaped the values which I still consider core to healthy interactions with my coworkers.
The values we landed on before we all went our separate ways (work-wise – we still talk) are as follows:
- People over code
- Tensions over problems over solutions
- Acceptance before alternatives
- Leadership through expertise
- Guide, don’t dictate
I shared a photo on Twitter of the note cards I keep, and people felt a little baffled by the meaning of items on the list. This post is my way of expanding the pithy aphorisms and making them digestible by people who are outside of our immediate circle.
People Over Code
In any work I do with another developer, I always prefer to consider the person I am working with over the code we are writing. I am more willing to throw away code I have written than throw away the relationship I am building with my coworker. Their trust is worth more to me than any disputed chunk of code.
Tensions Over Problems Over Solutions
This one may seem a little more cryptic at the outset, but it actually follows pretty directly from the first value. When discussing an issue, prefer to discuss the feeling you have (tension) regarding the issue at hand. If you aren’t able to provide insight into the feeling, then an example of the problem you’re encountering may be appropriate. If the previous options are not satisfactory, then, and only then, offer a solution.
An example of surfacing a tension might look like the following:
“I understand, mechanically, what this code is doing, but I really don’t know why. How does this relate to the bigger task we are working on?”
There is no specific problem surfaced here: no statement that something is wrong with the code, just a feeling that some understanding is missing.
If surfacing a tension isn’t enough, sometimes a problem will help solidify the idea.
“As we look at this method, we have to scroll past the edge of the screen to see everything and I lose track of the task context. Is there something we could do to make this better?”
This is more specific about a particular problem. This could be great if the concrete example is the only issue you have, but if it is merely an example, people might try to solve this one problem without thinking about the bigger-picture tension you have.
Sometimes people need a nudge to get their problem solving brains in gear. This is where solutions come in. It’s important to note, solutions are a last resort, not a first go-to.
“This method is really long, I’d like to break it into smaller methods. What do you think?”
You’ll note, this still leaves room for others to provide their thoughts, but it offers a solution which people might be able to follow if nothing better comes along.
Acceptance Before Alternatives
Acceptance before alternatives comes directly from the improv idea of “yes, and.” The aim with acceptance before alternatives is you listen and accept that the person you are talking with is bringing something to the table. Once they have said their piece, start by accepting it.
“I see what you’re saying. That method is really long, and it’s hard to see on one screen.”
Then, if you disagree with the proposed idea, you can provide an alternative.
“I understand you might want to put in folding regions* for the code there, would you be open to extracting methods instead?”
By offering an alternative this way, people are more likely to feel heard and understood. The discussion becomes less about ego and more about the advantages and disadvantages of a given approach.
* Folding regions are a way to tell some editors that a chunk of code goes together and can be folded to allow other code to fit on the screen.
Leadership Through Expertise
This value is a bit more fiddly than the previous ones. Though the words mean what you might assume by dictionary definition, it’s important to understand that, taken together, they are not meant as a club.
Leadership through expertise comes from the notion of a civil anarchist state. If we assume there is a power dynamic between people on a team, but there is no structured governing body, the team is guided exclusively through a shared desire to produce a software solution.
This means the person with the most expertise in a particular area can provide leadership through service to the rest of the team. No one person will be an expert in everything, and each person is likely to have more expertise in some topic than anyone else.
If we view “leadership through expertise” through this lens, then leadership is a servant position and expertise is the means of service to the team. Rather than being an appeal to authority, it is a way to facilitate guidance.
Guide, Don’t Dictate
Guide, don’t dictate is the last value and it largely provides the way in which people can work together. As the “expert” title moves from person to person in the group, the way to smooth the handoff is through guidance first. If the expert is leading and leading is serving, then guidance is the tool they use to serve the team as others work toward completion of a task.
Each of these values, ultimately, is centered around the idea that I work with people and people are what drives the work. All the values do, for me, is provide me with a path toward better, healthier interaction with my coworkers. Hopefully you find these values useful and, perhaps, create some of your own.
I remember a time, long ago, when WordPress was a small, scrappy piece of software which was dedicated primarily to the publishing of blogs and basic site content. It wasn’t the most fully-featured CMS on the planet, but it worked well enough. It was fast enough and flexible enough, so people, including me, used it.
Over time I noticed my site getting slower and slower. I looked at the database and there was nothing strange happening there. I checked the installation and ensured something hadn’t broken. Ultimately, WordPress is just kind of slow these days. It’s fine for people who are not comfortable with writing their own HTML and/or using the command line, but I just couldn’t deal anymore.
The most important realization I had was: my blog, like most blogs, is effectively static content. Nothing on the web loads faster than a single HTML file loaded from the filesystem. This means my site is likely to see the greatest performance improvement from converting to an entirely static content system.
I knew about Jekyll and I had used it before, but I looked around before diving in. There are several different packages including Jekyll, Hugo, and Gatsby which were my final three.
Ultimately, I was concerned about Gatsby because it is all tied together with React, so I rejected it: 1. I detest Facebook and don’t want to be even peripherally associated with any technology they control; 2. I looked at examples and it looked like people were actually using single page applications to serve their blogs. I’m sure that not every site is served as a React app since there is a server-side rendering system for React, but the choice just didn’t instill a sense of confidence.
This left Hugo and Jekyll. Both are static site generators which use template engines. Both have a fairly strong following. Both seemed to be completely acceptable options. The one and only thing I felt uncomfortable with regarding Hugo was that their template system felt MASSIVE and kind of confusing. I don’t want the infrastructure of my blog to become a separate hobby.
Ultimately, I leaned into what I knew and stuck with Jekyll. So far I have had no regrets.
The Mechanics of Conversion
I’ll be totally honest here, I only had a couple of pages so I simply copy/pasted the page content, made small edits, and moved on with my day. If you have a lot of pages, this will not be a good solution since you could end up copying and pasting for days, or weeks.
The really interesting part was converting my blog posts, which numbered over 100 in total. As I started to review the posts and what needed to be done to convert them, I knew I needed a script of some sort. What I ended up creating was a ~100 line conversion script with a little external configuration JSON file:
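The original script lives in the gist, but a minimal sketch of that kind of conversion gives the idea. This is a hypothetical reconstruction, not the author’s actual script: the config keys (`exportFile`, `outputDir`) and file names are illustrative, and it assumes the standard WordPress RSS export format (`content:encoded` for the body, `wp:post_date` for the date).

```python
# Hypothetical sketch of a WordPress-RSS-to-Jekyll conversion script.
# Assumes the standard WordPress export namespaces; the config keys
# ("exportFile", "outputDir") are illustrative, not from the original gist.
import json
import os
import re
import xml.etree.ElementTree as ET

# Namespaces used by the WordPress RSS export document
NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",
}

def slugify(title):
    # Lowercase the title and join the words with hyphens for the file name
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def convert(config_path="convert-config.json"):
    with open(config_path) as f:
        config = json.load(f)

    tree = ET.parse(config["exportFile"])
    output_dir = config.get("outputDir", "_posts")
    os.makedirs(output_dir, exist_ok=True)

    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="untitled")
        date = item.findtext("wp:post_date", default="", namespaces=NS)
        body = item.findtext("content:encoded", default="", namespaces=NS)

        # "YYYY-MM-DD HH:MM:SS" -> "YYYY-MM-DD", Jekyll's file name convention
        day = date.split(" ")[0]
        filename = f"{day}-{slugify(title)}.html"

        front_matter = f'---\nlayout: post\ntitle: "{title}"\ndate: {date}\n---\n\n'
        with open(os.path.join(output_dir, filename), "w") as out:
            out.write(front_matter + body)
```

Calling `convert("convert-config.json")` reads the export named in the config and writes one front-mattered post file per item into `_posts`, which is the shape of workflow described below.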
Using this script is as simple as editing your configuration file and then running the script. By default, the conversion script will write blog posts to the _posts directory. This means you should be able to run the script, rebuild your site and everything should be set to go.
Step 1: Export your WordPress blog posts
In your WordPress site, go to settings and choose the export option. WordPress has an export behavior which builds an RSS feed document by default. This is what we want.
Don’t try to be tricky with this or things could be harder later.
Step 2: Save the RSS XML export to your Jekyll project root
Just like the title says. Move the RSS XML document to your Jekyll project root. That’s all.
Step 3: Copy the script and configuration from the gist to your local Jekyll root
Copy the script into a file called
I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.
Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?
As we discussed last week, users have a predictable tolerance for wait times during page loading and information-seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.
I have been working for quite a while to devise a method for assessing web sites that provides two things. First, I want to assess a user’s ability to perform an action they want to perform. Second, I want to assess the user’s ability to complete a business goal while completing their own goals.
Google has some pretty neat toys for developers and CakePHP is a pretty friendly framework to quickly build applications on which is well supported. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.
Last night I was working on integrating OAuth consumers into Noisophile. This is the first time I had done something like this, so I was reading all of the material I could to get the best idea of what I was about to do. I came across a blog post about OAuth and one particular way of managing the information passed back from Twitter and the like.
I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.
My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.
Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.
Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.
It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.
I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.
Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:
When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.
This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.
With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.
How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.
There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.
Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.
Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.
I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.
Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.
I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.
Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:
People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.
When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhancing findability.
Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.
Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.
User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.
Every time I wander the web I seem to find it more complicated than the last time I left it. Considering this happens on a daily basis, the complexity appears to be growing monotonically. It has been shown again and again that the attention span of people on the web is extremely short. A good example of this is a post on Reputation Defender about the click-through rate on their search results.
It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.
I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server. I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness. Either way, I only post when something strikes me.
It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:
If there is one thing that I feel can be best learned from programming for the internet it’s modularity. Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess. Welcome to the “I need this fix last week” school of code updating. Honestly, that kind of thing happens to the best of us.
I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?
Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.
Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.
By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.