The Ethnographic Lens: Perspectives and Opportunities for New Data Dialects

I’ve also always balked at the division of data into qualitative and quantitative, believing that behind every quantitative measure is a qualitative judgement imbued with a set of situated agendae.

This is a great piece of writing (much less dense than Epic often can be), and the message is compelling. Churchill is Director of User Experience at Google - basically the home of 'big data' - so it's fascinating to get an insight into how they are using data as a jumping-off point for further ethnographic exploration ('ethnomining').

Why it's important to understand that agile is radically different to what went before

However, if you’re on the edge of delivery work or even further away in the organisation, you may just think that agile is a bit different to what’s gone before. You may not be close enough to delivery to experience just how radically different agile is.
This is important, because, if you think agile is not radically different to what’s gone before, you might believe that the processes and ways-of-working used in the enabling functions around delivery, that have evolved to enable non-agile delivery approaches, do not need to be changed to enable agile delivery.

I see this all the time. People not directly working in delivery teams appreciate that agile is different, but don't realise quite how different. This leads them to underestimate the need for other parts of the organisation to change as well, and results in teams with incompatible working practices constantly butting heads. Not fun.

User Experience Research and Strength of Evidence

User Experience research is about observing what people do. It’s not about canvassing people’s opinions. This is because, as data, opinions are worthless. For every 10 people who like your design 10 others will hate it and 10 more won’t care one way or the other. Opinions are not evidence.
Behaviours, on the other hand, are evidence. This is why a detective would much rather catch someone ‘red-handed’ in the act of committing a crime than depend on hearsay and supposition. Hence the often-repeated advice: “Pay attention to what people do, not to what they say.” It’s almost become a UX cliché but it’s a good starting point for a discussion about something important: strength of evidence.

In the analysis of almost every user research session I've seen, at least one person has said: "oh [the user] really didn't/did like that". If I were being charitable, I'd say this is a problem of terminology - people use 'like' when they mean 'positive experience' - but I think more often it's a fundamental misunderstanding of why we're doing research at all.

Based on Hodgson's post, it sounds like that misunderstanding is a common one:

Some years ago while working for a large corporation I was preparing a usability test when the project manager called and asked me to send over the list of usability questions.
“There are no questions in usability,” I replied.
“What do you mean?” she asked, “How can there be no questions? How are you going to find out if people like our new design?”
“But I’m not trying to find out if they like it,” I pointed out in a manner that, in hindsight, seems unnecessarily stroppy, “I’m trying to find out if they can use it. I have a list of tasks not a list of questions.”

Effective Product Roadmaps

Melissa Perri on creating better roadmaps:

Notice here that our Roadmap does not include any specifics on how we plan on tackling these problems. This is because we are still experimenting with options - we haven’t laid out a plan to implement a set of features or solution components.

I love the idea that the focus should be on the problem not the solution, but I know from my own experience how hard this can be when you have Sales and Marketing wanting to woo potential customers with promises of specific new features.

This Future Looks Familiar: Watching Blade Runner in 2017

Really beautiful post by Sarah Gailey:

But I would not call the world of Blade Runner strange, because it’s the opposite of strange. It’s familiar. If you subtract the flying cars and the jets of flame shooting out of the top of Los Angeles buildings, it’s not a far-off place. It’s fortunes earned off the backs of slaves, and deciding who gets to count as human. It’s impossible tests with impossible questions and impossible answers. It’s having empathy for the right things if you know what’s good for you. It’s death for those who seek freedom.

It’s a cop shooting a fleeing woman in the middle of the street, and a world where the city is subject to repeated klaxon call: move on, move on, move on.

It’s not so very strange to me.

Last week I watched Blade Runner for the first time. I can't say that I had the same impression as Gailey; I found it visually beautiful and very weird and not especially enjoyable, but that's because I'm not as insightful as she is. Having read this, I think she's spot on.

Moneyball, and the Fetishization of Data

In the same vein as Angela Bassa's post on 'data not being ground truth', Julia Rose West has a very good piece for Slate discussing how Moneyball and other similar pop-culture influences have tipped the scales too far the other way - people are now not skeptical enough when presented with data.

Julia Rose West:

But this moneyball-ization assumes that all information is reliable information, algorithms are unbiased magic, and big data can also paint the big picture. The scenarios where this has already happened have become all too familiar. Take the 2016 election. Most of us put our faith in the forecasting numbers, charts, maps, and needles that told us Hillary Clinton would be in the White House now. It took for the dissonance we experienced late Nov. 8 to consider the sources behind those predictions. Cade Metz wrote in Wired that Trump’s win “wasn’t so much a failure of the data as it was a failure of the people using the data ... a failure of the willingness to believe too blindly in data, not to see it for how flawed it really is.”

Data Alone isn't Ground Truth

Really thoughtful post by Angela Bassa on the importance of being skeptical when presented with 'data-driven' conclusions.

New Platforms, AI, & Evolving the Organization

Jason Costa of GGV on how machine learning technologies might change the role of product managers:

In the context of AI & deep learning, these PMs will need to understand what is possible right now, and then plan for what data they need to collect to make more things possible in 2 years, 5 years, and so on. That means it will be much more than just building a feature roadmap; it will also require a data roadmap. Furthermore, it won’t be just a matter of getting any data and massaging that set — products will need to be built with data collection in mind from the start. PMs will need to figure out what signals are important, and then build the product to collect the best data sets providing those signals.

This idea that we need to be thinking now about what we want to be able to do in several years' time is a problem I'm grappling with at the moment. Placing a big bet whose success we won't know until years down the line is basically the opposite of an 'Agile' approach.

The product manager is dead. Long live the value manager

Lots of product managers do not manage products, they’re managing services. It’s no longer enough just to help teams decide on the new features of their service, we need to focus on the ‘so what?’

- how will this actually increase the value of the service, for users and for our organisation?
- how do we hold ourselves accountable for increasing the value of our service, and empower our teams to use their combined skills to achieve this increase in value?

I’d like to propose that we are value managers, not product managers.

As a product manager who has recently moved into a role in which I'm not directly affecting a 'product', I find this idea appealing. But I have a niggling feeling that this definition dilutes the product manager's job too far: wouldn't the majority of managers in most organisations recognise themselves in Colfer's definition?

How Udemy combined Personas, JTBD, and Journeys to make a more complete user story

Really enjoyed this post by Claire Menke on the 'foundational research framework' used by the team at Udemy. I have been struggling recently with how our research insights can best be shared with the wider organisation, and whilst Menke's post doesn't address that point directly, the way her team have combined personas, Jobs-to-be-Done, and user journeys feels like the right approach. Treating the three as related - but at different levels of abstraction - gets past a lot of the which-is-the-right-way conversation we've been trying to overcome, because it shows there is no 'right' way, just the right level of abstraction for your needs.

You should read the piece, but this image sums her perspective up:

Udemy's Foundational Research Framework / Claire Menke 2017


Real Madrid's galácticos remembered

Loved this FourFourTwo retrospective on Real Madrid's "galácticos" team (but don't call them that to their face). To my 10-year-old self, the idea of this team of foreign mega-stars was intoxicatingly exotic, and hearing Figo, Ronaldo, Zidane, and Roberto Carlos reminisce together is pure footballing nostalgia. If you've got any affection for early-noughties football, read it.

It was fun on the pitch, too. “We might have been more of a team before then but there was something about that side that meant you went onto the pitch thinking: ‘I wonder what they’ll come up with next’,” remembers former right-back and FFT columnist Míchel Salgado. “I enjoyed playing in that team so much,” says Zidane. “The opposition might score two, three goals ... nowadays, if that happened, you’d say: ‘we’re going to lose’. But we didn’t. No pasa nada. They scored two? We’ll score three. It was fun.” Roberto Carlos puts it in simple terms: “We were all like kids enjoying ourselves out on the pitch when we were together.”

Everyone agreed on who the greatest talent was. Well, almost everyone. “Only Zidane would say that Zidane was not the best. We would joke about the fact that he was the only one who didn’t think he was the most skilful player of his generation,” Roberto Carlos says. Whenever David Beckham was asked about the Frenchman, there was a kind of reverential hush about the way he answered, almost in a whisper. “Zidane was the best,” Ronaldo agrees, “no doubt about that. Everything came so easily to him. His control was incredible. He was the best player I played with.”

The feeling is mutual and Roberto Carlos is proven right: “Ronaldo had the most talent,” Zidane says. “Ronaldo! He didn’t need to train, the cabrón!” laughs Figo, destroying at a stroke Roberto Carlos’s politically correct insistence that you couldn’t be any good if you were a little on the laid-back side. “He was so good that he didn’t need to train.” Zidane agrees: “Once in a while he didn’t fancy training, but the thing is that Ronaldo was such a good player and such a good person that in the end no one really minded.”

The Art of the Awkward 1:1

Mark Rabkin on how to make your 1:1 meetings awesome, by making them awkward:

Very often, people waste most of the 1:1’s potential. You might make a little agenda, and then give some updates, some light feedback, and share some complaints. It’s helpful and valuable and nice. But, ask yourself: is the conversation hard? Are you a little nervous or unsure how to get out what you’re trying to say? Is it awkward?
Because if it’s not a bit awkward, you’re not talking about the real stuff.

I've always thought I had productive 1:1 meetings, but this post makes me think differently. We were only ever talking about the easy stuff, things that were safe or uncontroversial. No one can achieve their full potential if they're not told honestly about things they can improve on.

Agile retrospectives do a great job of countering the common human desire to avoid confrontation by formalising the time when you're encouraged to give (for want of a better word) negative feedback. Making 1:1s follow a similar pattern is a logical next step in self-improvement, even if it might make things a bit awkward.

Good culture evolves from the bottom up, but only when those at the top allow it

I really enjoyed this blog from Stephen Foreshew-Cain (Executive Director at the Government Digital Service). In it, he describes how the working culture at GDS is created, maintained, and improved. It all sounds great, and it's well known that GDS has embedded behaviours and attitudes that are the envy of most digital teams working in large organisations.

Foreshew-Cain's overriding message is that culture 'evolves from the bottom', but I'm interested in the extent to which employees of an organisation can successfully drive culture changes without support from above. As Foreshew-Cain admits, this was easier at GDS because they were starting from more or less a blank piece of paper:

At GDS, we’re fortunate because we’re a relatively new organisation. We were able to build our own culture from scratch.

Much of what GDS has become began in its very early days, when a small team of people were building the GOV.UK alpha. But since then it has iterated, evolved, and changed, just like the products and services we make.

And it's not just that they were starting from scratch. Culture may come from below, but it requires the leaders of your organisation to create the environment where those at the bottom feel empowered to drive cultural change.

Foreshew-Cain again:

You can’t impose culture upon your team. You can’t tell them how to act.

Your job as a leader is to provide the right environment in which culture can emerge and evolve all by itself. That means trusting your people, and ensuring they feel safe; safe to ask questions, safe to make mistakes, safe to do what they think is right.

I completely agree with Foreshew-Cain here, but it does show the 'culture evolves from the bottom' statement to be slightly disingenuous. Yes, those at the bottom of the pile are best placed to define and drive an organisation's culture, because they are most closely impacted by it. And, equally, cultural initiatives will never take root if they're handed down from the executive level. But first you need the managers and senior leaders to want their employees to shape the organisation's culture, and that requires a culture of its own at the senior level. How do you create that?

How 'Making a Murderer' Goes Wrong

I loved Netflix's 'Making a Murderer' documentary series, but I spent most of the last episodes expecting to hear more of the State's case, and perhaps any doubts the film-makers had about Steven Avery's claims of innocence. In the first series of the 'Serial' podcast, Sarah Koenig was thorough about presenting both sides of the case against Adnan Syed, and seemed genuinely unsure as to whether she should believe him innocent or guilty. There was none of that even-handedness in 'Making a Murderer'.

Kathryn Schulz spoke to Penny Beerntsen, who has a significant role in the telling of Steven Avery's story:

Given her history, Beerntsen does not need any convincing that a criminal prosecution can go catastrophically awry. But when Ricciardi and Demos approached her about participating in “Making a Murderer” she declined, chiefly because, while her own experience with the criminal-justice system had led her to be wary of certitude, the filmmakers struck her as having already made up their minds. “It was very clear from the outset that they believed Steve was innocent,” she told me. “I didn’t feel they were journalists seeking the truth. I felt like they had a foregone conclusion and were looking for a forum in which to express it.”

This is exactly how the series felt to me, like the film-makers were only interested in presenting the case for Avery's innocence.

As Schulz concludes:

Toward the end of the series, Dean Strang, Steven Avery’s defense lawyer, notes that most of the problems in the criminal-justice system stem from “unwarranted certitude”—what he calls “a tragic lack of humility of everyone who participates.” Ultimately, “Making a Murderer” shares that flaw; it does not challenge our yearning for certainty or do the difficult work of helping to foster humility. Instead, it swaps one absolute for another—and, in doing so, comes to resemble the system it seeks to correct.

Coding Without a Safety Net

Yahoo no longer has a QA/testing team - the engineers are expected to test their own code by writing automated quality checks. 

Tekla Perry:

‘It was not without pain,’ Maimon says—though the problems were not as big as he feared. ‘We expected that things would break, and we would have to fix them. But the error that had been introduced by humans in the loop was larger than what was exposed by the new system.’

‘It turns out,’ Rossiter chimed in, ‘that when you have humans everywhere, checking this, checking that, they add so much human error into the chain that, when you take them out, even if you fail sometimes, overall you are doing better.’

Reading the rest of the article, it sounds like the move was done in quite an engineering-hostile way, but that aside, 'fewer people involved = fewer errors' is probably a good rule of thumb for building anything.

Benedict Evans on Google and Mobile

I have linked to this already on Twitter, but this really is a great piece, full of insightful observations and interesting analogies.

Benedict Evans on what Google is:

I generally look at Google as a vast machine learning engine that’s been stuffed with data for a decade and a half. Everything that Google does is about reach for that underlying engine - reach to get data in and reach to surface it out. The legacy web search is just one expression of that, and so is the search advertising, and so are Gmail and Maps - they’re all built onto that underlying asset.

On Google's seemingly schizophrenic attitude towards new projects:

Google tests new opportunities to see if they fit in the same way that a shark bites a surfer to see if they’re a seal. If not, you don’t change Google to fit the opportunity - you spit out the surfer (or what’s left of him).

And then most interesting of all, on who or what Google prioritises on mobile:

In the same sense, Google needs reach, but mobile means that there are lots of different kinds of reach. Consider someone who has an ‘official’ Android phone, perhaps even a Nexus, and is completely logged in - so Google has ‘perfect’ reach to them as an end-point. But, as I wrote here, suppose they live in a quiet suburb and drive only to work and to a few shops, never use Calendar, open Maps once a month and get a few personal emails in Gmail each week. Now contrast that with a 20-something in a big city who loves their iPhone and is not logged into any Google service - but is on this phone for hours every day, uses Google Maps (or maybe just apps that embed it) and is doing web search all the time. What kind of reach does Google have for these two?
Then, consider a farmer in rural Myanmar who’s just got their first phone: a $30 Android, with enough spending power to get perhaps 50 megs of cellular data a month, if that. What is that reach worth - what do they search for, what can the information they provide to Google be used for and, to raise the boring, pedantic question, how much are they worth to the advertising industry? Are they a higher priority than extending Google Now to the Apple Watch?

This is the most interesting question of all. Mobile is spreading further than desktop ever could, because these small devices can be so cheap. And where they're cheap, they're almost always Android; that is to say, they're almost always Google. This means more data than ever to feed the machine, and some might say too much: as Evans queries, how can Google prioritise the analysis of such a vast downpour of data?

Erika Hall On Surveys

When you are choosing research methods, and are considering surveys, there is one key question you need to answer for yourself:

Will the people I’m surveying be willing and able to provide a truthful answer to my question?

And as I say again and again, and will never tire of repeating, never ask people what they like or don’t like. Liking is a reported mental state and that doesn’t necessarily correspond to any behavior.

Alarm bells should ring if, when encountering a research need, the first answer that comes to mind is "we can do a survey". So, so, so often, the kinds of questions you need answers to cannot be satisfactorily answered with a survey.