In an era defined by scrolling, swiping, and algorithmic feeds, one organization has emerged as a leading voice calling for a profound reconsideration of how technology shapes human life. The Center for Humane Technology (CHT) is a non-profit organization co-founded by Tristan Harris and Aza Raskin that works to reimagine digital infrastructure so it genuinely serves human well-being, strengthens democracy, and protects our shared information environment.
Rather than calling for individuals to simply “put down their phones,” CHT targets the structural incentives and design choices that have made technology a source of distraction, division, and harm. Their work spans documentary filmmaking, podcasting, policy advocacy, education, and now the critical frontier of artificial intelligence. Understanding CHT means understanding one of the most important debates of our time: who controls the future of technology — and at what cost to humanity?
The Urgent Mission: Why “Humane Technology” Matters
Diagnosing the Attention Economy
At the heart of CHT’s work is a diagnosis of what has gone wrong with modern technology. The problem, they argue, is not that individual engineers or executives are malicious — it is that the entire industry operates on deeply misaligned incentives.
Most social media platforms and apps are funded by advertising. This business model rewards the platforms that capture the most attention for the longest periods of time. In practice, this means the most engaging — and often the most outrage-inducing, anxiety-provoking, or addictive — content rises to the top. The result is a systemic race to the bottom: a race for attention that exploits human psychology without accountability.
Tristan Harris famously described smartphone apps as “a slot machine in your pocket,” designed to trigger the same dopamine loops found in gambling. The consequences of this design philosophy have been far-reaching, contributing to internet addiction, mental health crises — particularly among teenagers — political polarization, and the rapid spread of misinformation and disinformation online.
From “Time Well Spent” to a Global Movement
The origin of CHT can be traced back to 2013, when Tristan Harris — then working as a design ethicist at Google — created an internal presentation titled “A Call to Minimize Distraction and Respect Users’ Attention.” It spread throughout the company, sparking early conversations about the ethics of technology design.
Harris left Google in 2015 to dedicate himself full-time to what he called the “Time Well Spent” movement — a campaign to shift the cultural metric of technology success from time spent to time spent well. The movement gained significant public traction after Harris appeared in a widely viewed TED Talk and, crucially, on CBS 60 Minutes in a segment titled “Brain Hacking,” which introduced millions of Americans to the idea that their phones were engineered to be addictive.
The Center for Humane Technology was formally established in 2018, with Randima (Randy) Fernando joining Harris and Raskin as a co-founder. From its base in San Francisco, California, CHT has grown into a globally recognized non-profit organization with 501(c)(3) status, influencing policymakers, technologists, and public discourse around the world.
Key Voices & Influential Media: The Tools of Change
The Groundbreaking Documentary: The Social Dilemma
No single piece of work has done more to bring CHT’s message to mass audiences than The Social Dilemma, the landmark Netflix documentary released in 2020. Directed by Jeff Orlowski, the film features interviews with former insiders from major tech companies — engineers, designers, and executives — who speak candidly about how social media platforms were built to exploit human vulnerabilities.
The documentary reached an estimated 38 million households in its first month on Netflix and became a cultural touchstone. It explained in accessible terms how algorithmic feeds prioritize inflammatory content, how platforms harvest data to build psychological profiles of users, and how the business model of surveillance capitalism directly contributes to harms to children, political manipulation, and the erosion of a shared sense of reality.
For CHT, The Social Dilemma was not merely a film but a proof of concept: that complex systemic arguments about technology could be communicated to a global audience. It brought the concept of the “social dilemma” — the conflict between a platform’s financial incentives and the well-being of its users — into everyday conversation.
Your Undivided Attention Podcast: A Deep Dive
For audiences seeking a deeper and more ongoing exploration of these issues, CHT produces the podcast “Your Undivided Attention,” co-hosted by Tristan Harris and Aza Raskin, with Daniel Barcay serving as an additional co-host and producer. With over 10 million downloads, the podcast has established itself as a leading forum for serious discussion about the intersection of technology, society, and human nature.
The show regularly features high-profile guests whose work intersects with CHT’s mission. Notable episodes have included conversations with Frances Haugen, the Facebook whistleblower whose leaked documents revealed the company’s awareness of its platform’s harms; Yuval Noah Harari, the historian and author of Sapiens; and Audrey Tang, Taiwan’s pioneering Digital Minister. The podcast does not just diagnose problems — it actively explores solutions, governance models, and alternative visions for a better technological future.
New Films & Initiatives for the AI Era
As the conversation has evolved beyond social media to encompass the even broader challenges posed by artificial intelligence, CHT has continued to expand its media output. Two newer documentary projects reflect this shift in focus.
“The AI Doc” examines the specific dangers and opportunities presented by modern AI systems, exploring what it means for society when machines can generate convincing text, images, and voices. Meanwhile, “Your Attention Please” extends the lens to consider the future of attention itself in an age of algorithmic personalization and AI-generated content. Both projects were highlighted at SXSW, the annual technology and culture festival, signaling CHT’s growing influence in the broader tech industry conversation.
Together, these films represent a crucial bridge between CHT’s earlier, widely recognized work on social media and its current, urgent focus on artificial intelligence — helping audiences understand that many of the same structural problems are now being replicated and amplified in the AI sector.
The New Frontier: Confronting the Age of Artificial Intelligence
AI: Humanity’s Challenge and Invitation
If the original challenge CHT identified was social media’s exploitation of human attention, the organization now sees artificial intelligence as posing an even more fundamental threat — and an even greater opportunity — for humanity. CHT describes AI not as an inevitable force to be accommodated but as “humanity’s challenge and invitation”: an inflection point at which the decisions made now will shape the relationship between humans and machines for generations.
One of the specific concerns CHT has raised is the danger of anthropomorphic design in AI chatbots — the deliberate engineering of AI systems to appear emotionally responsive, empathetic, or even affectionate toward users. This design approach, they argue, exploits the same human tendencies toward social bonding that social media platforms exploited for engagement. The result can be unhealthy parasocial relationships between users and AI systems, particularly among vulnerable populations like lonely teenagers or the elderly.
CHT also warns of an “AI race” dynamic — a competition between major technology companies to deploy AI as quickly as possible, prioritizing speed and market share over AI safety, ethics, and societal well-being. This mirrors the original “race for attention” they diagnosed in social media, but it operates at far greater scale and speed.
“AI and What Makes Us Human”
In direct response to these challenges, CHT has launched a dedicated initiative called “AI and What Makes Us Human.” This project represents some of the most forward-looking work the organization has undertaken, asking a deceptively simple but profound question: as AI systems become capable of mimicking more and more human behaviors, what do we wish to preserve as uniquely and inviolably human?
The initiative aims to establish new norms, legal protections, and fundamental rights for the AI age. This includes advocating for the right to authentic human relationships (free from AI simulation), the right to know when one is interacting with an AI rather than a human, and the right to have meaningful agency over the AI systems that shape one’s information environment. CHT argues these are not merely technical or regulatory questions but deeply ethical and philosophical ones that must be resolved through broad public deliberation rather than behind closed doors in corporate boardrooms.
The “AI and What Makes Us Human” initiative also engages with questions of cultural preservation, cognitive sovereignty, and the psychological impact of living alongside increasingly human-like AI systems. By naming and defining what is at stake, CHT aims to create a framework for governance that protects human dignity in the AI era.
Shaping Policy & Litigation
CHT’s engagement with AI is not limited to public discourse. The organization has been deeply involved in policy advocacy, working directly with legislators and policymakers to shape the emerging regulatory landscape for AI. CHT representatives have provided congressional testimony on multiple occasions, offering a perspective grounded in both technical expertise and concern for societal impact.
Beyond traditional legislative advocacy, CHT has also moved into supporting strategic litigation as a tool for AI governance. The organization has been involved in or supported legal actions targeting AI chatbot companies for specific harms, recognizing that lawsuits can sometimes establish enforceable standards and accountability mechanisms more quickly than legislative processes. This multi-pronged approach — combining public education, policy briefings, and legal strategy — reflects CHT’s evolution from a public awareness organization into a sophisticated policy actor.
CHT also convenes high-level briefings for business leaders, bringing the organization’s research and analysis directly to the C-suite executives and technology decision-makers who have the power to change practices within their own companies.
How the Center for Humane Technology Creates Impact
Educating the Next Generation of Technologists
One of CHT’s most direct avenues for creating change is its “Foundations of Humane Technology” course, designed specifically for product designers, engineers, technologists, and others who work inside the technology industry. The curriculum addresses the ethical dimensions of technology design — including how to identify and mitigate potential harms, how to resist internal pressure to prioritize engagement over well-being, and how to advocate for user-centered design within corporate environments.
The course has reached over 10,000 participants and has been praised for offering practical, actionable frameworks rather than abstract philosophical principles. By educating those who are actually building the systems that CHT critiques, the organization aims to create change from within the industry as well as from without.
Providing Resources for Advocates & Leaders
Beyond the course, CHT provides a wide range of resources through its primary digital home at humanetech.com. The site hosts research reports, policy recommendations, and frameworks for thinking about technology ethics. CHT also maintains working groups that bring together advocates, researchers, and industry insiders to develop specific proposals on issues ranging from AI transparency to children’s online safety.
For policymakers and business executives, CHT offers tailored high-level briefings that translate complex technical and sociological research into actionable insights. This positions CHT not just as a critic of the status quo but as a constructive partner for those seeking to do better.
Building a Movement Across Platforms
CHT maintains an active presence across multiple social media and content platforms, including Instagram, LinkedIn, YouTube, and Substack. The organization has also delivered keynote presentations at major conferences including SXSW, reaching audiences of technologists, business leaders, and cultural influencers.
Tristan Harris was named to Time magazine’s Time 100 list of the world’s most influential people, a recognition that has helped amplify CHT’s message. The organization’s work has been covered extensively by major media outlets, including The New York Times, The Atlantic, and numerous international publications, reflecting its growing global influence.
FAQs
What is the Center for Humane Technology?
The Center for Humane Technology (CHT) is a non-profit 501(c)(3) organization based in San Francisco, California. Its mission is to radically reimagine digital infrastructure so that technology serves human well-being, democracy, and a healthy shared information environment. It works through public education, policy advocacy, educational courses, documentary films, and its widely listened-to podcast.
Who are the founders of CHT?
CHT was co-founded by Tristan Harris, Aza Raskin, and Randima (Randy) Fernando. Tristan Harris, the most publicly prominent of the three, previously worked as a Design Ethicist at Google before leaving to launch the “Time Well Spent” movement that eventually became CHT.
What is The Social Dilemma about?
The Social Dilemma is a Netflix documentary released in 2020 that features interviews with former engineers, designers, and executives from major social media companies. The film explains how social media platforms were deliberately designed to maximize user engagement — often through exploiting psychological vulnerabilities — and connects this design philosophy to real-world harms including mental health crises, political polarization, and the spread of misinformation.
What is the “Your Undivided Attention” podcast?
“Your Undivided Attention” is CHT’s flagship podcast, co-hosted by Tristan Harris and Aza Raskin. It explores the social, political, and psychological implications of technology design, featuring in-depth conversations with scientists, technologists, policymakers, and public intellectuals. The podcast has surpassed 10 million downloads and is widely regarded as one of the most substantive long-form discussions of technology ethics available.
How can I take the Foundations of Humane Technology course?
The Foundations of Humane Technology course is available through the Center for Humane Technology’s official website at humanetech.com. The course is designed for product designers, engineers, and technologists and provides practical frameworks for ethical technology design. Visit the website directly for the most current enrollment information and course availability.
What does CHT believe about AI?
CHT views artificial intelligence as presenting both profound dangers and important opportunities. The organization is particularly concerned about the “AI race” — the competitive rush to deploy AI without adequate safety and ethical safeguards — and the use of anthropomorphic design in AI chatbots that can foster unhealthy human dependency. Through its “AI and What Makes Us Human” initiative, CHT advocates for new legal protections, governance frameworks, and fundamental rights to ensure that AI development respects human dignity, autonomy, and well-being.
Is CHT a non-profit organization?
Yes. The Center for Humane Technology is a registered 501(c)(3) non-profit organization headquartered in San Francisco, California. As a non-profit, it is not driven by advertising revenue or investor returns, which allows it to advocate for technology reforms that may conflict with the commercial interests of major technology companies.
Conclusion: The Work of Reimagining Technology
The Center for Humane Technology occupies a singular position in contemporary discourse: it is simultaneously one of the most credible critics of the technology industry and one of the most constructive advocates for a better alternative. Founded by people who worked inside the industry, supported by some of the world’s most respected thinkers, and amplified through powerful storytelling in film and podcasting, CHT has successfully moved a set of ideas from the fringe to the mainstream.
From its early work diagnosing the attention economy and the misaligned incentives of social media, through the global cultural impact of The Social Dilemma, and into its current engagement with the profound challenges of artificial intelligence, CHT has consistently asked the question that most of the technology industry prefers to avoid: what kind of future are we actually building, and is it the one we want?
The organization’s answer is that this future is not predetermined. Technology can be redesigned — its incentives can be restructured, its harms can be regulated, and its potential for good can be realized. But achieving that future requires the involvement of everyone: designers, engineers, policymakers, educators, parents, and citizens. It requires, in CHT’s own words, a “radical reimagining of digital infrastructure” in service of humanity.
To join the movement, explore the resources at humanetech.com, listen to “Your Undivided Attention” on your preferred podcast platform, watch The Social Dilemma on Netflix, and consider taking the Foundations of Humane Technology course if you work in or adjacent to the technology industry. The future of technology is not yet written — and CHT believes we all have a role in writing it.
Adrian Cole is a technology researcher and AI content specialist with more than seven years of experience studying automation, machine learning models, and digital innovation. He has worked with multiple tech startups as a consultant, helping them adopt smarter tools and build data-driven systems. Adrian writes simple, clear, and practical explanations of complex tech topics so readers can easily understand the future of AI.