As an avid script reader, I set out to read and review the scripts on the 2021 Black List, an annual ranking of top unproduced screenplays as voted by Hollywood insiders. Analyzing the data with my studio's script engagement scoring system, I found inconsistencies and ties in the traditional 1-10 rating approach. By comparing metrics like average page-by-page interest, 10-page hooks, and trajectory, I created an alternative ranking that addresses limitations such as ceiling effects. While numerical scores provide useful snapshots, the qualitative lessons learned about crafting compelling openings, sustaining intrigue, and sticking the landing better equip writers to impress readers and elevate their scripts. My goal is to design a rating methodology that balances quantitative consistency with qualitative insight into each script's strengths.
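To make the approach concrete, here is a minimal sketch, in Python, of how metrics like these could be blended into a composite ranking that separates scripts tied on a single 1-10 score. The metric names, scales and weights are illustrative assumptions, not my studio's actual scoring system.

```python
# A minimal sketch of combining per-script engagement metrics into a composite
# ranking. Metric names, scales and weights are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ScriptScores:
    title: str
    avg_page_interest: float   # mean page-by-page interest, e.g. on a 0-10 scale
    ten_page_hook: float       # strength of the first 10 pages, e.g. 0-10
    trajectory: float          # whether interest rises or falls, e.g. -1 to +1


def composite_score(s: ScriptScores, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted blend of the three metrics; the weights are placeholders."""
    w_avg, w_hook, w_traj = weights
    return (w_avg * s.avg_page_interest
            + w_hook * s.ten_page_hook
            + w_traj * (s.trajectory * 10))  # rescale trajectory to the 0-10 range


scripts = [
    ScriptScores("Script A", avg_page_interest=7.8, ten_page_hook=9.0, trajectory=0.2),
    ScriptScores("Script B", avg_page_interest=7.8, ten_page_hook=6.5, trajectory=0.6),
]

# Sorting on the composite separates scripts that would tie on a single 1-10 score.
for s in sorted(scripts, key=composite_score, reverse=True):
    print(f"{s.title}: {composite_score(s):.2f}")
```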
As part of the Digital Discovery Assessment Branch, I help teams determine if their project proposals warrant full Discovery studies. We created a "First Look" rapid assessment process to efficiently review submissions and decide next steps. I lead cross-functional teams in semi-structured interviews using an online whiteboard to capture details on vision, problems, priorities, blockers and people across 6 key areas. We share notes in real-time, summarise afterwards for validation, then triage to discovery, hand-back or development. Though effective, we continually refine the standardized framework, interview principles and transitions to subsequent phases. My goal is equipping stakeholders with evidence-based options while setting expectations that discovery entails collaborative, solution-agnostic research rather than just app building.
As part of the Digital Discovery Assessment Branch, I help assess project proposals submitted through the Digital Discovery Gateway to determine if they require a full Discovery, can be handled internally, or passed to a development team. To prioritise proposals and prevent overstretching our limited team, we created a "First Look" review process. I assembled a cross-functional team and used an Agile approach to define the First Look framework for rapidly gaining enough information to recommend next steps. We aligned questions around user research pillars like scope, recruitment, people and infrastructure to elicit key details. The collaborative First Look reviews now help us effectively funnel proposals to appropriate solutions while setting client expectations that Discovery entails exploratory, research-led collaboration rather than just app building.
As a researcher, I leverage validated research questions across projects to maximize insight per session. I visualize user needs, methods and questions together to spot repetition and refine wording. This "marathon" approach frontloads research then consolidates learnings into focused design phases. Though intense, concentrating questions and users boosts efficiency. I set expectations on cadence, ensure templates and tools enable seamless capture and analysis, and build in breaks to maintain team health. Continually evaluating if the breadth-first sequence meets goals, I course correct as needed. The aim is equipping stakeholders with comprehensive evidence to decide next steps, while creating scalable, sustainable processes for impact.
As a freelance researcher, I created a flexible user research strategy to efficiently launch multiple university products on deadline. Using ResearchOps principles, I first gathered background and requirements, centralized notes in an online whiteboard, and defined the project's purpose and key questions. After aligning with stakeholders on priorities, I framed research around must-have and should-have insights. To enable focus on execution, I standardized documents and got agreement on potential outcomes. Still iterative, this consultative approach brought clarity on the minimum viable deliverables to meet goals on budget and on time, while creating processes that enable continual improvement through evidence-based decision making at scale.
As a user researcher, I've done a 180 on personas - from initially creating them by default to then hating and avoiding them, to now recognizing their value for communicating insights. I realized researchers aren't the main users; our teams are. Done poorly, personas overgeneralize and stereotype. But reimagined carefully as representative "hats" users wear in different contexts, they compellingly encapsulate roles, attitudes and needs. I now create trait-based titles, custom illustrations, key quotes and needs summaries to bring archetypes to life, helping teams empathize and design inclusively. Personas aren't fixed - users combine and switch them situationally. Still imperfect, updating iteratively based on feedback is key to personas that resonate.
As part of a user research team for the UK Ministry of Defence, I led qualitative studies across the organisation to inform future cloud computing capabilities. Interviewing diverse stakeholders, my team uncovered complex requirements balancing security, agility, data needs and more. Developing personas, we visualized relationships in which some users provide infrastructure relied upon by others, such as those deployed operationally. Mapping tensions between policy mandates and job demands revealed opportunities for next-generation cloud services to smooth conflicts through thoughtful trade-offs. By synthesizing key findings from stakeholders across classification levels, from officials to intelligence analysts, into memorable personas, I communicated insights from this rich research and revealed the ecosystem of competing priorities across the Ministry.
As a researcher refining a repository of over 1,000 user stories, I transformed disorganized wishes into insights by standardizing needs as "As a/Who is/I need to/So that" statements. Categorizing by journey stage and mapping to themes revealed gaps and duplication. Consolidating outcomes and user types into dropdowns enabled powerful grouping and analysis. Consulting the team to assign category-level epics further summarized needs for stakeholder communication. Though iterative, this structured approach yielded a living catalog of clean, actionable, measurable archetypal needs. Continually revisiting across projects fosters a shared understanding of what success means for our users.
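As a rough illustration of how the standardized statements support grouping and analysis, here is a minimal Python sketch; the field values, journey stages and themes are hypothetical examples, not entries from the real repository.

```python
# A minimal sketch of storing user stories in the standardised
# "As a / Who is / I need to / So that" template and grouping them for analysis.
# All values below are hypothetical examples, not the actual repository content.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class UserNeed:
    as_a: str          # user type (controlled dropdown value)
    who_is: str        # context or situation
    i_need_to: str     # the task or need
    so_that: str       # the outcome (controlled dropdown value)
    journey_stage: str
    theme: str


needs = [
    UserNeed("applicant", "applying for the first time", "check my eligibility",
             "I do not waste time on an application I cannot make",
             journey_stage="before applying", theme="eligibility"),
    UserNeed("applicant", "returning after a rejection", "understand why I was refused",
             "I can fix the problem before reapplying",
             journey_stage="after a decision", theme="decisions"),
]

# Grouping by journey stage (or theme) makes gaps and duplication visible.
by_stage = defaultdict(list)
for n in needs:
    by_stage[n.journey_stage].append(n)

for stage, items in by_stage.items():
    print(f"{stage}: {len(items)} need(s)")
```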
As a researcher, I built a centralized repository to consolidate dispersed user insights and prevent duplication. First gathering project artefacts from myriad storage locations, I filtered the outputs, then meticulously tagged and categorized them in a database. Refining hundreds of needs into standard templates revealed themes and gaps. Making consultation and contribution routine equips teams to build on existing knowledge. Though intensive initially, this living reference of clean information better aligns efforts to what matters most - our users. Continually revisiting the source of truth will multiply the payoff over time.
As a researcher testing content, I isolate variables to pinpoint impact, benchmarking revisions against the original. With measurable goals set, observations beat self-reporting. My "highlighter test" elicits qualitative feedback at scale - participants mark passages as useful or confusing. I combine this highlighting with sticky-note comments, while standardizing color coding for feedback that increases or decreases confidence. In unmoderated tests, comment-only Google Docs and targeted survey questions add context. Though imperfect, gathering both metrics and “why” explanations better equips rewrites to meet needs. Continually refining methods to elicit actionable improvements makes content better serve users.
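For illustration, here is a minimal Python sketch of how highlighter-test marks might be tallied per passage to flag confusing sections; the passage names and data format are assumptions, not the output of any particular tool.

```python
# A minimal sketch of tallying "highlighter test" results: each participant marks
# passages as useful or confusing, and we count marks per passage. The passage IDs
# and record format are illustrative assumptions.
from collections import Counter

# One record per highlight: (participant, passage_id, judgement)
highlights = [
    ("p1", "intro", "useful"),
    ("p1", "eligibility", "confusing"),
    ("p2", "eligibility", "confusing"),
    ("p2", "how-to-apply", "useful"),
    ("p3", "eligibility", "useful"),
]

tally = Counter((passage, judgement) for _, passage, judgement in highlights)

# Passages with more "confusing" than "useful" marks are candidates for rewriting.
passages = {p for _, p, _ in highlights}
for passage in sorted(passages):
    useful = tally[(passage, "useful")]
    confusing = tally[(passage, "confusing")]
    flag = "  <- review" if confusing > useful else ""
    print(f"{passage}: useful={useful}, confusing={confusing}{flag}")
```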
As a facilitator of remote writers rooms starting from scratch, I focused first on building connections, not technology. Once together in person, we selected free platforms balancing accessibility, ease of use and features. Though imperfect, Zoom conferences, Slack discussions, Trello boards and Google Docs drafts enable global idea sharing and co-creation. While tools facilitate asynchronous commenting and scheduling, culture cultivates constructive mindsets. With structure but without judgment, multiple perspectives compound. More than project management, this fellowship of respect and curiosity seeds fertile soil for stories to organically sprout and blossom. Our shared goal of nurturing narrative excellence through a collaborative writing process germinates rich characters and intricate plots.
As an advocate of collaborative writing, I see an opportunity to evolve ossified creative processes. Though technological advances expand content access, stories are still solo endeavors culminating in overwhelming revision notes. By contrast, Agile methods enable continuous structured feedback, benefiting even website copy. Believing quality collective creation outperforms isolated efforts, I launched an experiment - remote writers rooms crafting entirely new works. With a north star vision and culture of respect, multiple perspectives compound quickly. Our goal is proving excellence emerges more readily through a fellowship pursuing iterative excellence than a lone genius. If we succeed, our storytelling could revolutionize how original, binge-worthy tales are conceived.
As a researcher focused on improving government services, I've found public sector users differ from consumers. Job titles proved too broad, so we created role-based personas capturing duties and needs. Consulting the cross-department research community provided insights and participants. Interviews risked straying into justification without firm moderation. Anonymity and security constraints occasionally hindered responses. Still, traveling nationwide to offices yielded observational gems. Balancing project team presence against candid feedback was an art. Though navigating complex hierarchies is demanding, empowering talented civil servants through evidence-based recommendations is rewarding work. Streamlining our efforts would extend that impact while respecting these users' expertise and commitment.
As a researcher, I see user insights as akin to statistical significance - both inspire confidence, not certainty. Just as data lacks meaning sans significance testing, design and product changes risk harm without underlying user research. Directly engaging representative users exposes motivations and barriers that metrics miss. Though imperfect, iterative immersion builds practical wisdom about problems and needs. Still, neither guarantees "right" answers; combining perspectives including analytics provides rigor while avoiding overreliance. Ultimately, embedding ongoing inquiry puts people before assumptions, processes or tools. Continually recentering evidence-based understanding keeps products evolving to serve users better.
As plain language content designers streamlining guidance, we embed empathy through a rigorous co-creation process. After validating audience needs, writers draft comprehensible articles, then loop in subject matter experts to confirm accuracy. Strict version control ensures amendments target facts without injecting jargon. Securing final approval locks down publications fulfilling user goals in plain terms. Though linear, cross-functional collaboration yields more readable directives benefiting citizens and government alike. Continually monitoring performance focuses iterations, while significant updates trigger full redesigns. By collectively purging legalese and speaking plainly to people’s priorities from the start, we transform opaque documents into accessible content that earns readers’ trust.
As a content designer focused on readable guidance, I insist every page serves a user need, or I won't publish it. Structuring articles around "As a..., I need to..., So that..." statements boosts relevance. Analytics reveal that pages aligned with needs garner more visits, longer attention, and higher bounce rates - indicating we efficiently answer readers' questions. Though counterintuitive, elevated exits suggest we meet a precise demand and then comprehensively equip users to proceed. Continually confirming pages directly address audience goals through data and feedback focuses iterations. Writing expressly to help specific people with defined tasks better serves public needs while advancing our mission. Embedding purpose-driven empathy transforms obligatory documents into valued resources.
As a content designer streamlining guidance, I employ metrics to track improvements. We shortened bluetongue instructions from 2,874 to 304 words while boosting actionability. Readability scores quantify complexity reductions through less jargon and simpler syntax. Word frequency analysis reveals a heightened user focus, with 10 times more "you" references. Though limited, data insights complement qualitative feedback to confirm we measurably meet aims like clarifying audience, purpose and legal duties. Continually tracking indicators over time demonstrates whether rewritten guidance better informs the public and increases compliance. But improving rigid documents is only the first step; we must keep engaging users to ensure guidance evolves to serve their needs.
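To show the kind of checks involved, here is a minimal Python sketch that counts "you" references and uses average words per sentence as a crude readability proxy; the example sentences are placeholders, and real readability scores would come from an established formula or tool.

```python
# A minimal sketch of word frequency counting and a rough readability proxy.
# Example texts are placeholders, not the actual bluetongue guidance.
import re


def word_frequency(text: str, word: str) -> int:
    """Count whole-word, case-insensitive occurrences of `word`."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))


def avg_words_per_sentence(text: str) -> float:
    """Rough proxy for syntactic complexity; shorter sentences usually read more easily."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\b\w+\b", text)
    return len(words) / max(len(sentences), 1)


original = ("Keepers of animals must ensure that the requirements of the relevant "
            "legislation are complied with at all times.")
rewrite = "You must follow the rules if you keep animals. Check what you need to do."

for label, text in [("original", original), ("rewrite", rewrite)]:
    print(f"{label}: 'you' count = {word_frequency(text, 'you')}, "
          f"avg words/sentence = {avg_words_per_sentence(text):.1f}")
```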