Metrics That Matter: How to Measure the Academic Impact of Fact-Checking Work
A research-informed framework for measuring fact-checking impact beyond pageviews—policy, citations, education, and misinformation reduction.
Fact-checking is often judged by the easiest number to count: pageviews. But if the goal is to understand whether fact-checking changes public knowledge, improves institutional decision-making, or reduces the spread of falsehoods, raw audience size is only a starting point. A serious evaluation framework needs to track impact across multiple layers: policy influence, scholarly citation, educational adoption, and misinformation reduction. That shift matters all the more as the International Fact-Checking Network (IFCN) and related organizations continue to professionalize the field even as audiences grow and finances tighten, as highlighted in recent Poynter reporting on the state of fact-checkers in 2025. For context on the ecosystem's momentum and constraints, see State of the Fact-Checkers: Audiences grow as finances worsen, and for the case for pairing speed with accuracy in breaking-news environments, see Fact-checking is crucial when the news moves this fast.
This guide proposes a research-informed framework for fact-checking evaluation that goes beyond vanity metrics. The central idea is simple: if fact-checking is a public-interest intervention, then it should be assessed like one. That means looking for evidence that a fact-check has been read, cited, adapted, taught, discussed by policymakers, embedded in reporting standards, or associated with measurable changes in belief and behavior. In practice, this requires combining media impact assessment, policy tracking, education analytics, and outcomes research into one coherent dashboard.
Pro Tip: A strong fact-checking evaluation plan should not ask, “How many people saw it?” first. It should ask, “What changed because of it?”
1. Why Pageviews Alone Misrepresent Fact-Checking Value
Audience size is a weak proxy for public value
Pageviews are useful for understanding distribution, but they are a poor stand-in for social impact. A fact-check on a niche policy claim may reach only a few thousand readers and still shape legislative debate, newsroom correction practices, or classroom instruction. Conversely, a viral post can attract massive attention while leaving beliefs unchanged. That is why an evaluation model centered only on clicks systematically undercounts the influence of fact-checkers working on specialized or technical claims.
The problem is not just conceptual; it is methodological. Pageviews capture exposure, not absorption, and certainly not downstream effects. A person may skim a fact-check, share it sarcastically, or ignore it entirely. In contrast, impact metrics should try to capture whether fact-checks are used as reference material, cited in formal settings, or linked to behavior change. This is especially relevant in fast-moving news cycles where accuracy has to compete with speed, which Poynter’s recent coverage underscores in its reporting on breaking developments and verification demands.
Why fact-checking needs a public-interest scorecard
Public-interest work is commonly evaluated by indirect and multi-stage outcomes. In journalism, for example, a single investigation might influence policy, litigation, public understanding, and future reporting practices. Fact-checking deserves the same logic. A policy rebuttal may be most valuable when it is quoted in a hearing, a government memo, or a civic education module—not when it trends on social media. Evaluating this work requires a scorecard that acknowledges different forms of reach and different types of authority.
This is also where the IFCN ecosystem matters. The network’s standards have helped professionalize how fact-checkers document sources, corrections, and methodology. But standards for production are not the same as standards for impact. If the field wants better reporting standards for outcomes, it must treat measurement as part of the editorial process. That means designing metrics that are auditable, reproducible, and resistant to cherry-picking.
What a mature evaluation framework should do
A mature framework should combine inputs, outputs, outcomes, and longer-term effects. Inputs include staffing, training, and cost. Outputs include the number of fact-checks published, corrections requested, and syndication partnerships. Outcomes include citation in media, use in classrooms, policy mentions, and shifts in belief or sharing behavior. Long-term effects include improved information literacy, lower acceptance of false claims, and higher trust in credible sources. This layered model is more demanding than analytics dashboards, but it is much closer to reality.
2. The Four-Domain Framework for Measuring Academic and Societal Impact
Domain one: policy influence
Policy influence asks whether fact-checks enter institutional decision-making. Evidence can include citations in legislative debates, testimony, committee reports, agency guidance, local ordinances, or judicial filings. In a robust system, the fact-checker tracks both direct citations and indirect policy uptake, such as when a policymaker adopts a corrected statistic without attribution. This domain should also record the time lag between publication and policy appearance, because influence often occurs weeks or months later.
To operationalize this, create a policy log. Tag each fact-check by issue area, jurisdiction, and claim type. Then monitor public records, transcript databases, and official documents for claim reuse. Pair that with qualitative coding so you can distinguish a symbolic mention from substantive incorporation. A fact-check cited in a hearing to rebut misinformation should count more than a passing name-drop.
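To make the policy log concrete, here is a minimal sketch in Python of one way to structure claim-level entries and their policy mentions. The field names and example values are illustrative, not a prescribed schema; the point is that each mention carries enough structure to compute the time lag and to separate substantive incorporation from a passing name-drop.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class PolicyMention:
    """One observed reuse of a fact-check in a policy setting."""
    source: str          # e.g. "committee hearing transcript", "agency guidance"
    jurisdiction: str    # e.g. "federal", "state", "municipal"
    mention_date: date
    substantive: bool    # True if the correction is incorporated, not just name-dropped

@dataclass
class PolicyLogEntry:
    """Claim-level record that accumulates policy mentions over time."""
    fact_check_id: str
    issue_area: str      # e.g. "public health", "crime statistics"
    claim_type: str      # e.g. "numerical statistic", "causal claim"
    published: date
    mentions: List[PolicyMention] = field(default_factory=list)

    def days_to_first_mention(self) -> Optional[int]:
        """Time lag between publication and the first policy appearance."""
        if not self.mentions:
            return None
        return (min(m.mention_date for m in self.mentions) - self.published).days
```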
Domain two: academic citations
Scholarly uptake is one of the clearest signals that fact-checking has moved from journalism into knowledge production. Academic citations can appear in public health studies, communication research, political science, education, and computational social science. But citation counting alone is too crude. A fact-check may be cited as empirical evidence, methodological precedent, background context, or a critique of misinformation systems. Those distinctions matter because they signal different levels of intellectual influence.
Researchers should therefore track both citation volume and citation purpose. For example, in bibliometric databases, annotate whether the fact-check is cited in theory-building, case analysis, methods discussion, or policy recommendation. This is also where a strong internal workflow matters: organizations that already manage sources and editorial records well are better positioned to support citation discovery. For practical guidance on workflow discipline, compare this approach with the logic behind suite vs best-of-breed workflow automation tools and the importance of data contracts discussed in integration patterns and data contract essentials.
Domain three: educational adoption
Educational adoption measures whether fact-checks are used in classrooms, training programs, library instruction, media literacy workshops, or teacher development. This is especially important because fact-checking can serve as a living example of source evaluation, argument analysis, and evidence-based reasoning. A fact-check that becomes assigned reading in a journalism course or cited in a high school civics lesson is doing more than correcting a claim; it is shaping how learners interpret information.
To measure this domain, organizations can monitor syllabi databases, LMS analytics, reading list submissions, teacher requests, and workshop attendance. They can also ask instructors to self-report usage through simple forms. Better still, they can build educator-friendly landing pages with downloadable lesson plans, discussion prompts, and claim-analysis rubrics. If you want a model for behavior change through teaching, consider how a classroom-based exercise like a classroom prediction league can make critical thinking visible and repeatable.
Domain four: misinformation reduction
The hardest and most important domain is whether fact-checking reduces misinformation. It is the domain where research interest is greatest and measurement is hardest in practice. A reduction metric can measure belief correction, lower resharing of falsehoods, reduced repeat exposure, or delayed spread after a fact-check publishes. It can also examine whether corrections “stick” over time. Importantly, reduction does not always mean immediate reversal; sometimes it means constraining further spread among key networks or reducing the velocity of a false claim.
Measuring this domain requires a mix of platform data, survey experiments, and observational analytics. A practical approach is to compare the spread of a claim before and after publication of a fact-check, while adjusting for news-cycle intensity and network effects. Teams working on digital traces should also be mindful of verification and privacy constraints, much like the care required in social media as evidence workflows. The goal is not to overclaim causality, but to estimate whether fact-checking is one of the interventions that meaningfully changes the information environment.
3. Building a Measurement System: Methods, Data Sources, and Standards
Start with a claim-level taxonomy
Before measurement comes classification. Not all fact-checks are equally likely to influence policy or scholarship. A taxonomy should classify claims by topic, severity, public salience, evidence type, and potential audience. For example, health misinformation may have different downstream effects than election misinformation, and a claim involving a numerical statistic may be more easily cited than a vague narrative. Without classification, impact data become noisy and difficult to interpret.
A useful taxonomy also separates the “object” of fact-checking from the “channel” of impact. A fact-check on a viral video may be widely shared but not cited in research. A fact-check on a legislative statistic may be lightly viewed but highly influential in policy. This distinction allows organizations to set realistic expectations and identify where editorial resources should go.
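As a sketch of what such a taxonomy can look like in practice, the snippet below encodes topic, severity, evidence type, and expected impact channel as structured fields. The specific categories are examples an organization would adapt to its own beats, and the "object" versus "channel" distinction above maps onto the topic field and the expected-channels list.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

class EvidenceType(Enum):
    NUMERICAL = "numerical statistic"
    DOCUMENTARY = "document or record"
    EXPERT = "expert consensus"
    NARRATIVE = "narrative or anecdote"

class ImpactChannel(Enum):
    POLICY = "policy uptake"
    ACADEMIC = "scholarly citation"
    EDUCATIONAL = "classroom adoption"
    REDUCTION = "misinformation reduction"

@dataclass
class ClaimRecord:
    """Classification attached to a fact-check at publication time."""
    fact_check_id: str
    topic: str                     # the "object": e.g. "elections", "vaccines"
    severity: Severity
    public_salience: float         # 0-1 editorial estimate
    evidence_type: EvidenceType
    expected_channels: List[ImpactChannel]   # the likely "channels" of impact

record = ClaimRecord(
    "fc-2025-0142", "crime statistics", Severity.MODERATE,
    0.4, EvidenceType.NUMERICAL, [ImpactChannel.POLICY],
)
```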
Triangulate data instead of relying on one platform
Any robust evaluation should combine at least four data streams: web analytics, citation databases, policy records, and educational usage logs. Web analytics show immediate reach. Citation databases reveal scholarly use. Policy records demonstrate institutional uptake. Education logs show pedagogical adoption. When possible, add social listening or platform-monitoring data to track how false claims and corrections move across networks.
This triangulation matters because each data source has blind spots. Web analytics can miss off-platform sharing. Citation databases can miss non-indexed venues and gray literature. Policy tracking can miss local or informal influence. Classroom adoption can be invisible if educators reuse content without attribution. The point is not to find one perfect metric, but to assemble enough imperfect measures to produce a credible picture. That is the same logic behind strong operational systems in other fields, including building a repeatable operating model and the measurement discipline described in measuring productivity impact.
Establish reporting standards early
Reporting standards should define what counts as a policy citation, an educational adoption event, or a misinformation reduction signal. They should specify time windows, sampling methods, and inclusion criteria. For example, if a government document paraphrases a fact-check without linking it, does that count as influence? If yes, under what verification threshold? If a teacher assigns a fact-check once, does that count as adoption, or only repeated use?
These questions sound bureaucratic, but they are essential for trustworthiness. Clear standards prevent organizations from inflating impact and make cross-organization comparisons possible. They also align with the broader logic of IFCN-style transparency: document the method, disclose limitations, and keep definitions stable enough to compare across time.
4. Measuring Policy Influence in Practice
Track formal and informal policy pathways
Policy influence can occur in formal institutions and informal advisory channels. Formal pathways include hearings, memos, public comments, white papers, and lawmaking records. Informal pathways include staff briefings, stakeholder meetings, and media coverage that shapes policy framing. A fact-check may never appear in a final statute yet still influence the language surrounding a bill. That still counts as a meaningful effect.
To track this systematically, build an entity list of agencies, committees, legislators, and issue advocates relevant to your beats. Set up alerts for key claims and create a monthly review process. If a fact-check is linked in a public official’s statement or embedded in a briefing deck, tag it immediately. Over time, these tags become a policy-influence map that reveals which topics, formats, and authors generate the strongest institutional response.
Use a weighted influence score
Not every policy mention should count equally. A weighted score can assign more value to direct citations in official documents than to media paraphrases. For example, a hearing citation might be weighted higher than a social media repost by a staffer, and a regulation reference might be higher still. Weighting should be documented and publicly explained so that the system remains transparent rather than arbitrary.
One practical model is a 1–5 scale: 1 for informal mention, 2 for media quotation in a policy context, 3 for staff or committee reference, 4 for formal inclusion in a draft document, and 5 for direct citation in enacted policy or official guidance. These weights will vary by institution, but the principle is the same: influence is not binary. It has degrees, and those degrees matter.
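A minimal sketch of that weighting, assuming the 1–5 scale above, might look like the following. The mention labels are illustrative and should mirror whatever definitions the organization publishes.

```python
# Illustrative weights following the 1-5 scale described above.
# Each organization should document and publish its own mapping.
INFLUENCE_WEIGHTS = {
    "informal_mention": 1,
    "media_quotation_in_policy_context": 2,
    "staff_or_committee_reference": 3,
    "draft_document_inclusion": 4,
    "enacted_policy_or_guidance_citation": 5,
}

def weighted_influence_score(mention_types: list) -> int:
    """Sum the weights of all observed mention types for one fact-check."""
    return sum(INFLUENCE_WEIGHTS.get(m, 0) for m in mention_types)

# Example: a hearing reference plus a citation in enacted guidance scores 8.
score = weighted_influence_score(
    ["staff_or_committee_reference", "enacted_policy_or_guidance_citation"]
)
```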
Case example: why a low-view, high-influence fact-check matters
Imagine a fact-check about a misleading crime statistic used in a city council debate. The article receives only moderate traffic, but the corrected number is later repeated in a briefing memo, then quoted by a local paper, then reflected in a public safety presentation. In a pageview-only model, that fact-check would look unremarkable. In an impact model, it would be recognized as a high-value correction with cascading institutional effects. That is exactly the kind of outcome fact-checking organizations should learn to capture.
5. Measuring Academic Citations and Scholarly Uptake
Go beyond raw citation counts
Citation counts are useful, but they should be treated as a starting point rather than the endpoint. A fact-check cited in a peer-reviewed article as evidence of misinformation prevalence has a different scholarly role than one cited as a media example in an introduction. You should therefore code citations by function, discipline, and venue. This allows you to see whether fact-checking is primarily used by communication scholars, methodologists, educators, or policy researchers.
It is also useful to track citation quality. Does the citing paper accurately describe the fact-check’s findings? Does it use the fact-check as a proxy for broader misinformation patterns? Does it cite multiple fact-checks from the same organization, or just one? These distinctions can reveal whether the organization is becoming a trusted source or simply a convenient example.
Use bibliometrics and altmetrics together
Bibliometrics show formal scholarly influence, while altmetrics can reveal early attention through policy documents, news coverage, reference managers, and online discussion. Combined, they offer a fuller picture of how academic audiences receive fact-checking work. A fact-check that generates little formal citation today may be heavily discussed in working papers, conference presentations, or preprints before it is indexed.
Organizations can also look for educational and disciplinary diffusion. If a fact-check on vaccine misinformation begins appearing in public health syllabi, digital literacy training, and communication textbooks, it may be operating as a core teaching artifact. That pattern is especially important in a field where journalism courses are changing and educators are looking for practical, contemporary examples to teach verification.
Best practices for citation harvesting
Build a systematic citation-harvesting routine using Google Scholar alerts, Crossref queries, institutional repositories, and subject-specific databases. Search for the fact-check title, URL, organization name, lead author, and key claim phrases. Then manually screen results to eliminate false positives. A structured spreadsheet should record citation date, publication type, discipline, and purpose. That spreadsheet becomes your longitudinal scholarly impact record.
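One piece of that routine can be automated. The sketch below, assuming the requests library is installed, queries Crossref's public REST API for works whose bibliographic metadata matches a fact-check title or key claim phrase and writes the candidates to a CSV. This is only one coarse candidate stream; Scholar alerts, repository searches, manual screening, and purpose-coding still happen afterward.

```python
import csv
import requests

def harvest_crossref_candidates(query: str, outfile: str, rows: int = 20) -> None:
    """Fetch candidate works from Crossref for later manual screening and coding."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]

    with open(outfile, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        # citation_purpose is coded manually after false positives are removed
        writer.writerow(["doi", "title", "type", "year", "citation_purpose"])
        for item in items:
            title = (item.get("title") or [""])[0]
            year = (item.get("issued", {}).get("date-parts") or [[None]])[0][0]
            writer.writerow([item.get("DOI", ""), title, item.get("type", ""), year, ""])

harvest_crossref_candidates("misleading crime statistic fact-check", "citation_candidates.csv")
```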
For stronger reporting standards, document the search strategy itself. If later you need to show that your citation analysis is reproducible, you will have an audit trail. This is the same quality-control mindset that underpins reliable technical workflows in other domains, including vetting third-party science and the workflow discipline found in search and naming systems.
6. Measuring Educational Adoption and Learning Impact
From downloads to curricular integration
Educational adoption should be measured at multiple levels. A download of a classroom handout is not the same as curricular integration, and a one-off workshop is not the same as repeated semester use. The strongest indicators are syllabus mentions, assignment incorporation, and institutional partnerships with schools, libraries, or teacher training programs. These markers show that fact-checking is being embedded in pedagogy rather than merely consumed as content.
Organizations should also look at the surrounding materials. If instructors are pairing fact-checks with source evaluation rubrics, media literacy worksheets, or student reflection prompts, that indicates deeper educational value. Tracking those combinations can help organizations design more usable products for teachers and librarians.
Use educator feedback loops
Feedback from educators can reveal what fact-checks are most teachable. Some topics are too technical for general audiences but excellent for instruction because they show how evidence is checked. Others are too partisan or context-specific to work well in classrooms. Surveying educators helps separate high-engagement content from high-utility content. The best fact-checking organizations treat teachers as partners, not just users.
One effective practice is to create a lightweight educator form asking three questions: Did you use the fact-check? How did you use it? What student reaction did you observe? This gives you both quantitative and qualitative data. Over time, these responses can guide editorial design, format selection, and topic prioritization.
Learning outcomes matter more than attendance
The ultimate educational question is whether fact-checks improve critical thinking, source evaluation, or correction of misconceptions. Short pre/post tests, reflective writing tasks, and classroom discussion analysis can help estimate this. Even simple outcome measures, such as increased ability to identify unsupported claims or distinguish evidence from opinion, are meaningful. These outcomes are often more valuable than attendance metrics because they speak to knowledge transfer.
For inspiration on designing teaching experiences that build durable skills rather than passive consumption, see designing tasks that build, not replace, language skills. The same principle applies here: fact-checking should not merely inform learners; it should strengthen their ability to evaluate claims independently.
7. Measuring Misinformation Reduction with Scientific Rigor
Define the mechanism of change
Misinformation reduction is not a single outcome. It can mean reduced belief, reduced sharing, reduced confidence in a false claim, or reduced persistence over time. You need to specify the mechanism before you can measure it. For example, if your intervention is a fact-check article, the relevant mechanism might be belief correction. If it is distribution on social media, the mechanism may be sharing inhibition. If it is educational adoption, the mechanism may be improved claim evaluation.
Once the mechanism is defined, choose indicators that fit it. Surveys can measure belief change. Platform analytics can estimate resharing. Experimental designs can compare exposure to correction versus no correction. Longitudinal follow-up can show whether the correction decays or endures. Without this specificity, “misinformation reduction” becomes too vague to measure credibly.
Use quasi-experimental and experimental approaches
When possible, pair observational data with experiments. Randomized exposure studies can test whether a fact-check changes belief or sharing intentions. Quasi-experimental methods can compare similar claims with and without fact-check coverage. Interrupted time series can examine whether spread slows after publication. No single method is enough on its own, but together they can establish a plausible causal story.
Researchers should be careful not to overstate what they can prove from platform data alone. Claims spread in ecosystems shaped by news events, political identity, algorithmic ranking, and influencer networks. A good evaluation framework therefore combines evidence of direct correction with evidence of dampened virality. That balanced approach is more defensible than trying to claim that one article eliminated a false narrative.
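As a sketch of the interrupted-time-series framing under the simplest possible assumptions, the snippet below compares a claim's mean daily shares before and after a fact-check publishes. A real analysis would add segmented regression, control claims, and adjustments for news-cycle intensity; this only illustrates the shape of the comparison.

```python
from datetime import date
from statistics import mean
from typing import Dict

def before_after_spread(daily_shares: Dict[date, int], published: date) -> dict:
    """Naive pre/post comparison of a claim's daily share counts around publication."""
    before = [n for d, n in daily_shares.items() if d < published]
    after = [n for d, n in daily_shares.items() if d >= published]
    result = {
        "mean_daily_shares_before": mean(before) if before else None,
        "mean_daily_shares_after": mean(after) if after else None,
        "relative_change": None,
    }
    if before and after and mean(before) > 0:
        result["relative_change"] = mean(after) / mean(before) - 1
    return result
```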
Measure persistence, not just immediate correction
Immediate correction effects often fade. The harder question is whether fact-checking leaves a durable residue that protects against repeat exposure or reactivation. To test this, collect follow-up data days, weeks, or months later. If a false claim resurfaces, do people remember the correction? Are they less likely to reshare it? Do they cite the fact-check in rebuttal? These are more meaningful than short-term click spikes.
A useful analogy comes from operational resilience and safety work, where the point is not only to stop a threat once but to reduce future susceptibility. For a broader example of systems thinking, see blocking harmful sites at scale, which illustrates how monitoring and enforcement require both immediate response and long-term controls.
8. A Practical Dashboard for Fact-Checking Organizations
Core dashboard categories
A usable dashboard should include four panels: reach, influence, knowledge uptake, and correction effects. Reach includes visits, referrals, and social shares. Influence includes policy citations and media references. Knowledge uptake includes academic citations and educational adoption. Correction effects include belief shifts, reduced sharing, and sustained memory of the correction. Together, these panels prevent overreliance on one type of success.
Dashboard design should support decision-making. Editors need to know which topics are likely to generate policy attention. Funders need to know whether the organization is building institutional value. Researchers need to know whether the work is contributing to scholarship. Readers need transparency about what the organization counts as success and why.
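A minimal sketch of one row of such a dashboard, with illustrative field names rather than a prescribed schema, shows how a single fact-check can feed all four panels instead of only the reach panel.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ImpactDashboardRow:
    """One fact-check, contributing to reach, influence, uptake, and correction panels."""
    fact_check_id: str
    # Reach
    pageviews: int = 0
    social_shares: int = 0
    # Influence
    policy_citations: int = 0
    media_references: int = 0
    # Knowledge uptake
    academic_citations: int = 0
    educational_adoptions: int = 0
    # Correction effects (filled in only when survey or experimental data exist)
    belief_shift_estimate: Optional[float] = None
    resharing_change: Optional[float] = None

row = ImpactDashboardRow("fc-2025-0142", pageviews=3200, policy_citations=2, academic_citations=1)
print(asdict(row))
```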
Comparison table: metrics, data sources, and limitations
| Metric category | What it measures | Primary data source | Strength | Limitation |
|---|---|---|---|---|
| Pageviews | Exposure to the fact-check | Web analytics | Easy to collect | Does not show influence |
| Policy citations | Institutional uptake | Hearings, memos, official documents | High public-interest value | Slow to appear and hard to detect |
| Academic citations | Scholarly recognition | Scholar databases, repositories | Signals intellectual authority | Misses informal and non-indexed use |
| Educational adoption | Classroom and training use | Syllabi, LMS logs, educator surveys | Shows learning utility | Often invisible without reporting |
| Misinformation reduction | Behavioral and belief effects | Surveys, experiments, platform data | Closest to true intervention impact | Causality is hard to establish |
Weight metrics by mission and audience
Not every organization needs the same dashboard. A local newsroom may prioritize policy influence and community correction. A research-oriented lab may prioritize academic citations and experimental evidence. A civic education nonprofit may prioritize classroom adoption and learning outcomes. The dashboard should reflect mission, not just generic best practice.
This mission-based approach is similar to how a strong business or product team chooses between different operational models. For example, organizations often need to decide whether to adopt a broad platform or specialized tools, much like the tradeoffs discussed in turning hype into real projects and the practical comparison in comparing cloud agent stacks. The right metric system, like the right stack, depends on the use case.
9. What Funders, Editors, and Researchers Should Ask Next
For funders: ask about outcomes, not just outputs
Funders should ask whether an organization can demonstrate downstream effects, not just content volume. Useful questions include: Which policy arenas have you influenced? Which academic disciplines cite your work? Which educators use your materials? What evidence suggests your work reduces misinformation? These questions force a more strategic conversation about public value.
Funders should also support measurement infrastructure. Impact evaluation takes time, staff, and technical capacity. If the sector wants better evidence, it must finance better tracking systems, not merely demand more proof. That includes support for data governance, metadata standards, and cross-organization benchmarking.
For editors: embed measurement in workflow
Editors should treat measurement as part of the publishing process, not an afterthought. Every fact-check can include standardized metadata fields for issue area, audience, evidence type, and potential policy relevance. This makes later tracking possible. Editors can also flag high-stakes claims for deeper follow-up, such as watchlists for policy or scholarly citation.
Editorial teams should adopt a habit of post-publication review. After 30, 90, and 180 days, review whether the fact-check has been cited, reused, taught, or referenced in policymaking. That review loop can guide future topic selection and show whether the organization’s influence is concentrated in a few domains or distributed more broadly.
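A minimal sketch of that review loop, assuming completed checkpoints are tracked per fact-check, can flag which reviews are currently due.

```python
from datetime import date
from typing import List

REVIEW_CHECKPOINTS_DAYS = (30, 90, 180)

def reviews_due(published: date, today: date, completed: List[int]) -> List[int]:
    """Return the checkpoints (days since publication) that are due but not yet done."""
    age = (today - published).days
    return [d for d in REVIEW_CHECKPOINTS_DAYS if age >= d and d not in completed]

# Example: a fact-check published 95 days ago with only the 30-day review done
print(reviews_due(date(2025, 1, 10), date(2025, 4, 15), completed=[30]))  # [90]
```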
For researchers: build comparative studies
Researchers can move the field forward by comparing fact-check formats, distribution strategies, and topic categories. Which corrections are more likely to be cited in academic literature? Which formats are easier for teachers to use? Which topics generate policy attention? Comparative designs can answer these questions more reliably than single-case success stories.
There is also room for cross-disciplinary work on media impact assessment, organizational trust, and network diffusion. The fact-checking field would benefit from more standard protocols that make studies comparable across countries and time periods. Without comparability, we get anecdotes; with it, we get science.
10. A Research-Informed Checklist for Better Reporting Standards
Minimum standard fields for each fact-check
Every published fact-check should ideally include enough structured data to support later analysis. At minimum, organizations should record publication date, topic category, claim type, source language, evidence base, and intended audience. If possible, add metadata for policy relevance, classroom suitability, and anticipated citation value. These fields are simple to collect at publication time and invaluable later.
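A minimal sketch of such a record, with illustrative field names and values, shows how little structure is actually required at publication time.

```python
import json

# Illustrative minimum metadata attached to a fact-check at publication time.
fact_check_metadata = {
    "publication_date": "2025-06-03",
    "topic_category": "public health",
    "claim_type": "numerical statistic",
    "source_language": "en",
    "evidence_base": ["peer-reviewed study", "official dataset"],
    "intended_audience": "general public",
    # Optional fields that make later impact tracking easier
    "policy_relevance": "state legislature",
    "classroom_suitability": True,
    "anticipated_citation_value": "medium",
}

print(json.dumps(fact_check_metadata, indent=2))
```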
Standardized metadata also improves discoverability. Scholarly researchers, educators, and policymakers can more easily find relevant fact-checks if the organization uses consistent tags and titles. That consistency is part of trustworthiness: it makes the organization easier to verify, compare, and cite.
Benchmarks for interpreting success
A benchmark system helps organizations interpret their own data. For example, a fact-check with low traffic but high citation in policy or research may be a major success. A high-traffic correction with no downstream effects may be valuable for awareness but weaker on deeper impact. Benchmarks should therefore be relative to mission and topic.
One useful practice is to define “signal tiers”: Tier 1 for exposure, Tier 2 for reuse, Tier 3 for institutional uptake, and Tier 4 for demonstrated change. This keeps teams from mistaking visibility for value. It also helps communicate impact to funders and partners in a transparent way.
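A minimal sketch of tier assignment, returning the highest tier a fact-check has reached given the evidence collected so far, makes the hierarchy explicit.

```python
def signal_tier(has_exposure: bool, has_reuse: bool,
                has_institutional_uptake: bool, has_demonstrated_change: bool) -> int:
    """Highest tier reached: 1 exposure, 2 reuse, 3 institutional uptake,
    4 demonstrated change; 0 means no signal recorded yet."""
    if has_demonstrated_change:
        return 4
    if has_institutional_uptake:
        return 3
    if has_reuse:
        return 2
    if has_exposure:
        return 1
    return 0

# Example: cited in a hearing, but no measured belief change yet -> tier 3
print(signal_tier(True, True, True, False))
```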
How to avoid metric gaming
Whenever metrics become targets, they can be gamed. To reduce that risk, use multiple measures, rotate emphasis over time, and include qualitative review. If an organization optimizes only for pageviews, it may chase sensational claims. If it optimizes only for citations, it may neglect public accessibility. Balanced scorecards are harder to manipulate because they reward different kinds of value.
Transparency matters here. Publish your methodology, explain what you count, and acknowledge where your evidence is weak. For organizations operating in fast-moving, high-stakes environments, this kind of honesty is not a weakness; it is a core trust signal.
Conclusion: Measuring Impact Is Part of the Mission
Fact-checking is most valuable when it changes something beyond the article itself. A thoughtful evaluation system can reveal whether that change happens in policy, scholarship, education, or the broader misinformation environment. The field does not need fewer metrics; it needs better ones—metrics that align with public-interest goals and are robust enough to support comparison, learning, and improvement. In that sense, impact measurement is not separate from fact-checking. It is part of fact-checking’s ethical responsibility.
The future of the field will likely depend on its ability to prove value in ways that matter to journalists, funders, educators, and researchers. That is why the most credible organizations will be those that track influence over time, publish their reporting standards, and continuously refine their methods. As the ecosystem evolves, the question will not be whether fact-checking gets attention, but whether it earns lasting trust and produces measurable public benefit.
For readers exploring adjacent systems of evaluation and workflow design, the lessons from productivity impact measurement, workflow automation choices, and agentic search and discovery show that rigorous measurement is always a design problem. Fact-checking should be no different.
Related Reading
- State of the Fact-Checkers: Audiences grow as finances worsen - A useful snapshot of the field’s growth, pressures, and operational realities.
- Fact-checking is crucial when the news moves this fast - A timely reminder of why speed and verification must work together.
- Using AI for PESTLE: Prompts, Limits, and a Verification Checklist - A helpful model for structured verification and methodological discipline.
- Niche Sponsorships: How Toolmakers Become High-Value Partners for Technical Creators - Relevant for understanding sustainable partnerships and value signaling.
- Use Conversion Data to Prioritize Link Building: A CRO-Driven Outreach Framework - A strong example of outcome-oriented measurement strategy.
FAQ: Measuring the Academic Impact of Fact-Checking Work
1. Why are pageviews not enough to evaluate fact-checking?
Pageviews measure exposure, not influence. A fact-check can have modest traffic and still affect policy, scholarship, or teaching. To understand true value, organizations need to measure downstream outcomes such as citations, adoption, and correction effects.
2. What is the best indicator of policy influence?
The best indicator is a direct citation or substantive reuse in an official policy document, hearing, memo, or regulatory text. Informal mentions matter too, but they should be weighted lower than formal institutional incorporation.
3. How can fact-checkers track academic citations?
They can use Google Scholar, Crossref, institutional repositories, and discipline-specific databases. It is important to search for article titles, URLs, organization names, author names, and key claim phrases, then code the citations by purpose and discipline.
4. Can misinformation reduction actually be measured?
Yes, but carefully. Researchers can use surveys, experiments, interrupted time series, and platform analytics to estimate changes in belief, sharing, and persistence. Causality is difficult, so organizations should report findings conservatively.
5. What should fact-checking organizations include in their reporting standards?
At minimum: claim type, topic category, publication date, evidence base, intended audience, and any tags indicating policy relevance or classroom suitability. These fields make later impact tracking possible and improve transparency.
6. How should small fact-checking teams start?
Start with a simple claim taxonomy and a spreadsheet that tracks policy mentions, academic citations, and educational use. Even lightweight systems can produce valuable evidence if they are used consistently and reviewed on a regular schedule.