· 5 min read
Last year with AI | 2025
A year of building with AI tools, creating accessible solutions, and learning to balance excitement with responsibility.
Looking Back at a Transformative Year
2025 was the year I went all-in on AI tools, and it’s been both exhilarating and humbling. I’m optimistic about what AI enables, but I’m also more cautious than I was.
The Tools That Changed My Workflow
Cursor became my daily driver on my personal computer. It's not hyperbole to say it changed how I code. The autocomplete is uncanny, the inline edits save hours, and the ability to chat with my codebase feels great, when it works. When it doesn't, it's a reminder that we're still in the early days. My prompts can only get me so far, and I still enjoy reading and writing code without AI.
Claude has become an essential thought partner in my professional growth. I've used it to refine how I position technical work, shifting from process-focused frameworks to business-impact narratives that resonate. It has helped me develop strategic project concepts and build accessibility-focused tools. Beyond the tactical work, Claude has been invaluable for exploring emerging AI capabilities in UI/UX design, helping me stay ahead of industry trends while keeping the human-centered approach that defines my work. It has accelerated my ability to think strategically and communicate effectively, all while my authentic voice stays intact.
ryOS is something you might not have heard of. It's the vibe-coded, Mac-inspired OS from Ryo, the head of design at Cursor, and it's the AI project I've had the most fun with this year. It was cool to watch the OS grow with features that made me smile. It's a bit nostalgic for me, considering I've been a full-time Mac user since high school; my first computer was a 233MHz PowerPC G3 in '97. Back to ryOS: it's engaging, open, creative, and has so much potential. Honestly, my conversations with ryOS feel more real than my convos with Claude. :)
Building Read Easy
The highlight of my year was building Read Easy, an AI agent designed to help people with dyslexia access digital content more easily. Using AI to make the web more accessible felt purposeful: technology genuinely helping the 10-15% of people who struggle with reading online.
It's still in beta, but the feedback from people who actually have dyslexia has been encouraging. Building something that matters to real people and makes their lives easier: that's the use of AI I want to see more of.
Learning and Community
I attended an Ethan Mollick talk this year, which was eye-opening. He's the author of Co-Intelligence: Living and Working with AI, a book I enjoyed when it came out (I hope he has another in the works). His perspective on treating AI as a collaborator rather than a tool or a threat resonated. It's not about AI replacing us; it's about figuring out how to work with AI in ways that amplify what makes us human. He's also a great speaker, and the live coding was a highlight!
I also dove into courses on building with the Vercel AI SDK, which opened up new possibilities for creating agents and interactive experiences. The community around AI development is vibrant, but I’m conscious of how fast things move. What’s cutting-edge today is legacy tomorrow.
What I’m Not Saying
I’m deliberately not talking about AI at work. That’s a boundary I’ve set. The tools I use personally, the experiments I run, the agents I build—those are fair game. But my employer’s use of AI, the strategies we’re exploring, the challenges we’re facing? That stays internal.
The Cautious Part
Here's where I get real: AI is producing a lot of slop. Low-quality content, derivative work, and solutions that look right but fall apart under scrutiny. I've contributed to that myself, with early iterations of things and experiments that shouldn't have seen the light of day.
I’m trying to be more intentional. Not every problem needs an AI solution. Not every tool needs an agent. Sometimes the best code is the simplest code, written by a human who understands the context and consequences.
AI is powerful, but it’s not magic. It’s a tool, an incredibly capable one, but still just a tool. It amplifies our abilities, but it also amplifies our biases, our blind spots, and our mistakes.
Moving Forward
As we head into 2026, I’m carrying these lessons with me:
Use AI for augmentation, not automation. The best results come when I’m actively involved, not when I’m delegating everything to a model.
Build things that matter. Read Easy matters. Random side projects that no one will use? Maybe not so much.
Verify, always. AI is confident even when it’s wrong. Trust, but verify. Then verify again.
Keep learning. The landscape changes weekly. What I know today might be obsolete tomorrow, and I need to be okay with that.
Stay human. At the end of the day, technology should serve people, not the other way around. If AI isn’t making someone’s life better, why are we building it?
Gratitude
To everyone building accessible, thoughtful, human-centered AI tools: thank you. To the people who’ve given feedback on Read Easy: thank you. To the community sharing knowledge, asking hard questions, and pushing back when things go too far: thank you.
2025 was a year of experimentation, learning, and building. Some of it worked, some of it didn’t, but all of it taught me something.
Here's to a 2026 where we use AI to build things that actually help people, where we're honest about limitations, and where we remember that the goal isn't to replace humanity; it's to empower it.
What were your experiences with AI this year? I’d love to hear what you learned: robertfauver@gmail.com
