Kazuki (@kazuki)

Cofounder of Glasp. I collect ideas and stories worth sharing 📚

San Francisco, CA

Joined Oct 9, 2020

1,073 Following · 5,839 Followers
1.47k pages · 13.59k highlights · 172.10k views
[Activity heatmap] Total: 823 days 📚 · 154 weeks 📚
Tags
AI 159
Learning 93
Founder 84
Knowledge 84
Growth 82
Startup 74
Product Development 71
Founding Story 71
Reading 67
Psychology 58
Human Behavior 58
Self-improvement 55
Life Lessons 54
Glasp 46
Note-taking 46
Mindset 44
Startup Idea 38
PMF 35
Knowledge Management 34
LLMs 32
Economics 32
Consumer App 31
Mission 31
Purpose 30
Community 28
Writing 28
Search Engine 27
Metrics 27
Curator Economy 26
Highlight 25
Thinking 25
Advice 24
Curation 24
VC 23
Productivity 21
Culture 21
Creator Economy 20
Crypto 20
Marketing 20
Motivation 20
Tools for Thought 19
Creativity 18
Business Model 18
Web3.0 17
Decision-Making 17
PM 17
YC 16
Fundraise 16
Social Token 15
Education 15
UX 15
PG 15
Investment 14
Quotes 14
Design 13
Social 13
Habit 12
Innovation 12
Leadership 12
Philosophy 11
Twitter 11
Branding 11
NFT 11
Memory 11
Future of Work 11
Network Effect 11
Equity 10
Finance 10
Strategy 9
ML 9
Social Media 9
User Acquisition 9
Career 9
a16z 9
Cognitive Science 8
Investor 8
Co-founder 8
Retention 8
Media 8
SEO 7
TED Talk 7
Collect 7
Blockchain 7
DAO 7
Gamification 7
Annotation 7
CAC 7
Publication 7
Neuroscience 6
Content Marketing 6
Moat 6
Chrome Extension 6
Viral 6
500ish 5
Accelerator 5
Human History 5
Copyright 5
Social Annotation 5
Ideation 5
Marginalia 5
HR 5
Sociology 5
Storytelling 5
Pitch 4
Mental Model 4
Gen Z 4
Consumer Trends 4
Edtech 4
Steve Jobs 4
Personal Development 4
Burnout 4
Communication 4
AI Agents 4
Product Hunt 4
Legal 4
Agency 4
Tokenomics 4
Marketplace 4
Technology 3
Vector Database 3
Workflow 3
Health 3
Book 3
Hiring 3
Organization 3
Ethereum 3
GTM 3
Market Research 3
Engagement 3
Algorithm 3
Advisor 2
Autonomous Vehicles 2
Problem Solving 2
Wellness 2
Remote 2
Collaboration 2
Core Action 2
Intelligence 2
Meditation 2
Elon Musk 2
Onboarding 2
AGI 2
Defensibility 2
Pre-seed 2
Corporate Governance 2
Analytics 2
Kudos 2
Personalization 2
Life 2
Luck 2
Newsletter 2
Content Discovery 2
Audio 2
Market Size 2
Pitch Deck 2
Relationship 2
Growth Hacking 2
Cold Email 2
BtoB 2
Email Marketing 2
Coding 1
Warren Buffett 1
Literacy 1
Debate 1
Cognitive Bias 1
Vision 1
Push Notification 1
Public Policy 1
Negotiation 1
Hearing 1
M&A 1
Visa 1
CTO 1
Teaching 1
HCI 1
Discussion 1
Prompt 1
Pivot 1
Information 1
Curiosity 1
Longevity 1
Salary 1
Time Management 1
AI Safety 1
Security 1
Ecosystem 1
Nature 1
Digital Garden 1
IP 1
International Growth 1
Typography 1
Biography 1
Commerce 1
Child Mortality 1
Reading List 1
Language Learning 1
Design Thinking 1
Glasp Talk 1
Interpretability 1
Quantum Computing 1
Child Development 1
Freemium 1
Swift 1
Meme 1
Cultural Insights 1
Text Embeddings 1
History 1
Word of Mouth 1
Insurance 1
Governance 1
BtoC 1
Independence 1
Climate Change 1
Naval 1
Digital Legacy 1


Highlights

The Urgency of Interpretability

www.darioamodei.com/post/the-urgency-of-interpretability

AI
AI Safety
Interpretability

May 16, 2025

162

OpenAI’s Sam Altman on Building the ‘Core AI Subscription’ for Your Life - YouTube

www.youtube.com/watch?v=ctcMA6chfDY

AI
Startup

May 15, 2025

11

What is the most popular language to study in your country?

blog.duolingo.com/which-countries-study-which-languages-and-what-can-we-learn-from-it/

Language Learning
Learning
Cultural Insights

May 13, 2025

82

The Reading Obsession

www.enterlabyrinth.com/p/the-reading-obsession

Reading
Warren Buffett
Knowledge

May 12, 2025

4

The New Moat: Memory

www.newinternet.tech/p/the-new-moat-memory

AI
Memory
Personalization
Network Effect

May 7, 2025

83

Who is writing the story of your life?

read.glasp.co/p/who-is-writing-the-story-of-your

Philosophy
Self-improvement
Psychology

Apr 30, 2025

82

Snowflake CEO Frank Slootman explains why your company priorities are wrong

x.com/StartupArchive_/status/1916822616012136588

Startup
Founder
Decision-Making

Apr 29, 2025

41

Human Curiosity in the Age of AI

read.glasp.co/p/human-curiosity-in-the-age-of-ai

AI
Curiosity
Learning

Apr 25, 2025

63

The Data Wars & Reimagining Your Product During a Platform Shift

www.implications.com/p/the-data-wars-and-reimagining-your

AI
Human Behavior
Personalization
UX

Apr 25, 2025

157

Productivity

blog.samaltman.com/productivity/

Productivity
Life Lessons
Self-improvement

Apr 22, 2025

155

How to calculate CAC Payback Period (the right way)

www.mostlymetrics.com/p/how-to-calculate-customer-acquisition

Finance
Metrics
CAC

Apr 22, 2025

101

How to Calculate Net Dollar Retention Rate (the right way)

www.mostlymetrics.com/p/how-to-calculate-net-dollar-retention

Finance
Metrics

Apr 22, 2025

4

Use This 3-Step System to Get More Out of Every Book You Read

read.glasp.co/p/use-this-3-step-system-to-get-more

Reading
Note-taking

Apr 18, 2025

53

Introducing GPT-4.1 in the API

openai.com/index/gpt-4-1/

AI
LLMs

Apr 14, 2025

8

Contrary Research Rundown #131

contraryresearch.substack.com/p/contrary-research-rundown-131

AI
Autonomous Vehicles
Economics

Apr 14, 2025

154

Every marketing channel sucks right now

andrewchen.substack.com/p/every-marketing-channel-sucks-right

Growth
Marketing

Apr 9, 2025

155

"The Psychology of Human Misjudgment"

jamesclear.com/great-speeches/psychology-of-human-misjudgment-by-charlie-munger

Psychology
Human Behavior
Decision-Making
Cognitive Bias

Apr 6, 2025

114

The End of Reading

www.theringer.com/podcasts/plain-english-with-derek-thompson/2025/02/28/the-end-of-reading

Reading
Literacy

Apr 3, 2025

2

The Cybernetic Teammate

www.oneusefulthing.org/p/the-cybernetic-teammate

AI
Future of Work
Collaboration

Mar 29, 2025

93

Using AI to make teaching easier & more impactful

www.oneusefulthing.org/p/using-ai-to-make-teaching-easier

AI
Education

Mar 29, 2025

161

The original growth hacker reveals his secrets | Sean Ellis (author of “Hacking Growth”) - YouTube

www.youtube.com/watch?v=VjJ6xcv7e8s

Growth Hacking
Growth
PMF

Mar 28, 2025

194

How To Validate PMF EffectivelyㅣSean Ellis, Hacking Growth - YouTube

www.youtube.com/watch?v=TDBXoIiYFdQ

Growth
Growth Hacking
Startup

Mar 28, 2025

135

How to Be The Hero of Your Story - Jason Feifer

www.jasonfeifer.com/story-of-confidence/

Self-improvement
Storytelling

Mar 25, 2025

41

Measuring AI Ability to Complete Long Tasks

metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/

AI

Mar 20, 2025

82

Reading Well

map.simonsarris.com/p/reading-well

Reading
Self-improvement

Mar 16, 2025

74

Pure Independence

collabfund.com/blog/pure-independence/

Life Lessons
Independence
Purpose

Mar 11, 2025

187

The Most Precious Resource is Agency

map.simonsarris.com/p/the-most-precious-resource-is-agency

Agency
Education
Child Development
Learning

Mar 8, 2025

83

How to be More Agentic

usefulfictions.substack.com/p/how-to-be-more-agentic

Agency
Self-improvement
Burnout
Learning

Mar 4, 2025

93

Andrej Karpathy on X: "Agency > Intelligence I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence, various entertainment/media, obsession with IQ etc. Agency is significantly more powerful and significantly more scarce. Are you hiring for agency? Are" / X

x.com/karpathy/status/1894099637218545984/

AI
Intelligence
Agency

Mar 3, 2025

41

Build for Tomorrow | Jason Feifer | Talks at Google - YouTube

www.youtube.com/watch?v=MhVZTzMy-BA

Life Lessons
Self-improvement
Motivation

Mar 3, 2025

204

Mess Up? Use It to Your Advantage - Jason Feifer

www.jasonfeifer.com/how-failure-builds-trust/

Psychology
Life Lessons

Mar 3, 2025

72

The future of the internet is likely smaller communities, with a focus on curated experiences

www.theverge.com/press-room/617654/internet-community-future-research

Community
AI
Consumer Trends

Feb 27, 2025

72

Why Are We Building Glasp?

read.glasp.co/p/why-im-building-glasp

Glasp
Knowledge
Mission

Feb 26, 2025

125

The Dam Has Burst.

multitudes.weisser.io/p/the-dam-has-burst

Motivation
Culture
Human Behavior

Feb 25, 2025

31

Barking in Public

investing101.substack.com/p/barking-in-public

Life Lessons
Human Behavior

Feb 24, 2025

73

The Wrath of Reading & Writing

investing101.substack.com/p/the-wrath-of-reading-and-writing

Reading
Writing

Feb 24, 2025

93

On Writing

investing101.substack.com/p/on-writing

Writing
Habit

Feb 24, 2025

51

2024 in Books

investing101.substack.com/p/2024-in-books

Reading
Education

Feb 24, 2025

22

Outmaneuvering Friction, Stages of Agents, & Gamification of Everything

www.implications.com/p/outmaneuvering-friction-stages-of

AI
Product Development
Human Behavior

Feb 21, 2025

94

How to Tell Your Story the Right Way - Jason Feifer

www.jasonfeifer.com/sharing-something-personal-purposefully/

Storytelling
Communication

Feb 21, 2025

81

Evergreen Over Ephemeral | Glasp

glasp.co/posts/e387a4bb-4be1-4cbd-ba2b-cfbe5d25920c

Glasp
Knowledge
Social Media

Feb 18, 2025

41

Why Blog If Nobody Reads It?

andysblog.uk/why-blog-if-nobody-reads-it/

Writing

Feb 18, 2025

62

Richard Feynman's Letter on What Problems to Solve

fs.blog/richard-feynman-what-problems-to-solve/

Life Lessons
Quotes

Feb 13, 2025

31

Internal Meeting at Apple: "Make something wonderful" | Steve Jobs Archive

putsomethingback.stevejobsarchive.com/internal-meeting-at-apple

Steve Jobs

Feb 13, 2025

2

Put Something Back | Steve Jobs Archive

putsomethingback.stevejobsarchive.com/

Steve Jobs

Feb 13, 2025

1

The Growth Maze vs The Idea Maze

andrewchen.substack.com/p/the-growth-maze-vs-the-idea-maze

AI
Growth

Feb 11, 2025

134

The Yerkes-Dodson Law of Arousal and Performance

www.simplypsychology.org/what-is-the-yerkes-dodson-law.html

Psychology

Feb 11, 2025

5

Three Observations

blog.samaltman.com/three-observations

AI
AGI
Future of Work

Feb 10, 2025

133

Introducing deep research

openai.com/index/introducing-deep-research/

AI
AI Agents

Feb 6, 2025

72


The Urgency of Interpretability
URL: https://www.darioamodei.com/post/the-urgency-of-interpretability
Tags: AI · AI Safety · Interpretability

Highlights & Notes

the progress of the underlying technology is inexorable, driven by forces too powerful to stop, but the way in which it happens—the order in which things are built, the applications we choose, and the details of how it is rolled out to society—are eminently possible to change, and it’s possible to have great positive impact by doing so. We can’t stop the bus, but we can steer it.

When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does—why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate.

Many of the risks and worries associated with generative AI are ultimately consequences of this opacity, and would be much easier to address if the models were interpretable.

Our inability to understand models’ internal mechanisms means that we cannot meaningfully predict such behaviors, and therefore struggle to rule them out; indeed, models do exhibit unexpected emergent behaviors, though none that have yet risen to major levels of concern.

For example, one major concern is AI deception or power-seeking. The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments

there are a huge number of possible ways to “jailbreak” or trick the model, and the only way to discover the existence of a jailbreak is to find it empirically. If instead it were possible to look inside models, we might be able to systematically block all jailbreaks, and also to characterize what dangerous knowledge the models have.

There are other more exotic consequences of opacity, such as that it inhibits our ability to judge whether AI systems are (or may someday be) sentient and may be deserving of important rights. This is a complex enough topic that I won’t get into it in detail, but I suspect it will be important in the future.

we quickly discovered that while some neurons were immediately interpretable, the vast majority were an incoherent pastiche of many different words and concepts. We referred to this phenomenon as superposition. [Footnote 7: The basic idea of superposition was described by Arora et al. in 2016, and more generally traces back to classical mathematical work on compressed sensing. The hypothesis that it explained uninterpretable neurons goes back to early mechanistic interpretability work on vision models. What changed at this time was that it became clear this was going to be a central problem for language models, much worse than in vision. We were able to provide a strong theoretical basis for having conviction that superposition was the right hypothesis to pursue.]

and we quickly realized that the models likely contained billions of concepts, but in a hopelessly mixed-up fashion that we couldn’t make any sense of. The model uses superposition because this allows it to express more concepts than it has neurons, enabling it to learn more. If superposition seems tangled and difficult to understand, that’s because […]

Note: Interesting...
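
The "more concepts than neurons" idea lends itself to a quick numeric demo. The Python sketch below is illustrative only (it is not from the essay): in a 64-dimensional space, over a thousand random unit directions are nearly orthogonal, so a sparse handful of active "concepts" can be summed into a single activation vector and still be read back out by dot products.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 64, 1024  # 64 "neurons" hold 1024 concepts: far more concepts than dimensions
directions = rng.normal(size=(n, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # one unit vector per concept

# A sparse input: only 5 of the 1024 concepts are active at once.
active = rng.choice(n, size=5, replace=False)
activation = directions[active].sum(axis=0)  # the superposed representation

# Readout: dot products against every concept direction.
# Active concepts score near 1; inactive ones score near 0 plus small interference.
scores = directions @ activation
recovered = np.argsort(scores)[-5:]
print(sorted(active.tolist()), sorted(recovered.tolist()))  # should match with high probability
```

The recovery works only because activations are sparse; with many concepts active at once the interference swamps the signal, which is the same sparsity assumption behind the compressed-sensing work cited in the footnote above.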

we employed a method called autointerpretability —which uses an AI system itself to analyze interpretability features—to scale the process of not just finding the features, but listing and identifying what they mean in human terms.
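
The loop this describes is simple enough to sketch. The following Python is a rough illustration under stated assumptions, not Anthropic's implementation: FEATURE_EXAMPLES stands in for each feature's top-activating text snippets, and toy_llm stands in for a real language-model API; neither name appears in the essay.

```python
# Hypothetical stand-in for the top-activating snippets of each learned feature.
FEATURE_EXAMPLES = {
    0: ["the Golden Gate Bridge", "a suspension bridge in fog", "the bridge toll plaza"],
    1: ["for (int i = 0; ...)", "the while loop body", "iterate over the list"],
}

def toy_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would query a model here."""
    return "bridge-related text" if "bridge" in prompt.lower() else "looping/iteration code"

def autointerpret(features: dict[int, list[str]]) -> dict[int, str]:
    """Ask an 'explainer' model to name, in human terms, what each feature's snippets share."""
    labels = {}
    for fid, snippets in features.items():
        prompt = (
            "These snippets all strongly activate one internal feature of a "
            "language model. In a short phrase, what concept do they share?\n\n"
            + "\n---\n".join(snippets)
        )
        labels[fid] = toy_llm(prompt)
    return labels

print(autointerpret(FEATURE_EXAMPLES))
```

The design point is scale: labeling 30 million features by hand is infeasible, so the naming step itself is delegated to a model and humans spot-check the results.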

Finding and identifying 30 million features is a significant step forward, but we believe there may actually be a billion or more concepts in even a small model

All of this progress, while scientifically impressive, doesn’t directly answer the question of how we can use interpretability to reduce the risks I listed earlier.

Our long-run aspiration is to be able to look at a state-of-the-art model and essentially do a “brain scan”: a checkup that has a high probability of identifying a wide range of issues including tendencies to lie or deceive, power-seeking, flaws in jailbreaks, cognitive strengths and weaknesses of the model as a whole, and much more.

we could have AI systems equivalent to a “country of geniuses in a datacenter” as soon as 2026 or 2027. I am very concerned about deploying such systems without a better handle on interpretability. These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.

Interpretability gets less attention than the constant deluge of model releases, but it is arguably more important. It also feels to me like it is an ideal time to join the field: the recent “circuits” results have opened up many directions in parallel.

Interpretability is also a natural fit for academic and independent researchers: it has the flavor of basic science, and many parts of it can be studied without needing huge computational resources. To be clear, some independent researchers and academics do work on interpretability

Powerful AI will shape humanity’s destiny, and we deserve to understand our own creations before they radically transform our economy, our lives, and our future.

Note: Yes