The release of ChatGPT was my wake-up call. As a product manager, I saw both extraordinary potential and existential threat – could AI supercharge my capabilities or eventually replace me entirely? Throughout 2023 and 2024, I dove deep into the AI ecosystem: mastering tools, devouring blogs, consuming countless hours of content, and tracking every development. Yet despite having an AI assistant at my fingertips, I felt something was missing. The real transformation remained elusive.
That's when I decided to push beyond theory and into uncharted territory. Instead of just using AI as a helpful sidekick, I wanted to test its limits as a true product development partner. My goal wasn't to create another quick MVP – I wanted to build a production-grade web application that could handle real users and scale with demand. The challenge? Using AI to transform myself into a full-stack product creator: designer, developer, DevOps engineer, and data specialist all rolled into one.
Impossible? Maybe. Revolutionary? Definitely. Join me as I document this ambitious experiment in My Journal, where I'll discover if AI can truly empower product managers to break free from traditional constraints and reshape what's possible in product development.
My action plan
🤖 Inside the AI Alliance Agent Meetup: Bridging Industrial Expertise & Agent Innovation
Just returned from the AI Agent meetup in San Francisco with over 200 attendees! This new series hosted by the AI Alliance brought together some of the brightest minds in the agent space for demonstrations, discussions, and networking.
🏭 Industrial Enterprises & Agent Reliability
A fascinating revelation: 25% of AI Alliance members are Industrial Enterprises. The opening discussion highlighted a critical challenge:
🐝 BeeAI Framework Deep Dive
Witnessed an impressive live demonstration of the BeeAI framework that's tackling a growing challenge in the agent ecosystem:
🌊 LangFlow 1.3 Showcase
The LangFlow presentation unveiled their impressive 1.3 release with server capabilities and MCP connectivity:
🔍 Pattern Recognition:
The evening revealed a clear evolution in the agent ecosystem: we're moving from building individual agents to orchestrating agent collectives. The frameworks that enable reliable agent communication, coordination, and integration are becoming as important as the agents themselves.
Next up: Exploring how these multi-agent orchestration patterns might apply to product management workflows. Could a collection of specialized agents transform how we approach market research, user testing, and roadmap planning? The possibilities are expanding! 🚀
P.S. Made several valuable connections with fellow AI agent enthusiasts throughout the evening. The community's energy and collaborative spirit reminds me why in-person events remain irreplaceable, even in our increasingly virtual world.
🤖 AI Agents: The End of White-Collar Work As We Know It?
Just returned from #AIAgentWeek in San Francisco where the energy was electric—120+ innovators in the room (and 150 more on the waitlist!) sharing breakthrough insights that are fundamentally reshaping how we think about work, delegation, and automation.
Key takeaways that have me rethinking everything:
1️⃣ The paradigm is flipping:
2️⃣ Industry transformation is accelerating:
3️⃣ Agent architecture evolution:
4️⃣ Quality & trust mechanisms emerging:
5️⃣ UX transformation:
The consumer implications are fascinating: we'll increasingly delegate our digital identity to agents that act on our behalf across platforms. Event info on Luma.
What's your take? Are businesses ready for this shift? Are YOU ready?
Weekly Reads: AI Innovation & Industry News
📚 What I Read This Week
Business & Leadership
Technical Insights
Industry Moves
Cool Tech Developments
Media & Analysis
Ethical & Social Impact
Historical Context
What are you reading this week? Share your favorite AI news and insights with me on LinkedIn.
🚀 AI-Powered Startups: Inside Look at an Early Stage Company
Had a fascinating meeting today with a founder, via Y Combinator founder matching, that provided real-world validation of how AI is transforming startup economics and product development approaches!
👥 Startup Staffing Revolution:
The founder is building a warehouse management system leveraging 17 years of industry experience, but with a radically different approach to engineering:
🔍 Product Design Transformation:
The AI influence extends deeply into how products are being conceptualized:
📈 Broader Industry Validation:
This single case study reflects a massive trend confirmed by YC managing partner Jared Friedman:
🔮 Pattern Recognition:
The democratization of software development is accelerating exponentially. Non-technical founders with domain expertise can now build sophisticated software products without assembling large engineering teams. The competitive advantage is shifting from "who can hire the most engineers" to "who understands the market problems most deeply."
🛠️ AI-Powered Development: From Marketing Scripts to Framework Adventures
Today was all about putting AI tools to work on real-world problems and expanding my technical horizons. The contrast between theoretical capabilities and practical implementation continues to fascinate!
📊 Windsurf + Claude 3.7 Sonnet Project Deep Dive:
Built a marketing utility for my brother's automotive business that showcases both the power and limitations of AI-assisted development:
🚀 Next.js Learning Journey:
Following advice from an engineering leader to build production-grade applications faster:
Running npx create-next-app@latest pulled version 15.2.3 with an incompatible Tailwind 4.0.

🔍 Pattern Recognition:
The velocity of tech frameworks presents a unique challenge: they move faster than educational content can keep pace. This suggests that understanding fundamental concepts may be more valuable than version-specific knowledge.
🚀 AI Models Leveling Up: Gemini 2.5 & OpenAI's Text Revolution
The AI race is accelerating, and I've been putting these tools through their paces! Today's deep dive reveals how these advancements are transforming the PM toolkit:
🔍 Model Exploration Highlights:
💡 Pattern Recognition: The 10x Professional Is Emerging
The integration of these tools across work and personal contexts is revealing a clear pattern:
🔮 Beyond Tech: Expanding Into Knowledge Work
Perhaps most fascinating is watching these tools transform traditionally human-centric domains:
The implications are profound: as these models continue improving, what other professional services will people begin consulting AI for first?
🗺️ Navigating the Evolving AI Landscape
The AI world continues to transform at breakneck speed! These past weeks have been a personal and professional whirlwind as I navigate the rapidly changing terrain of AI tools and capabilities.
🔊 Voice AI Revolution
OpenAI released next-generation speech-to-text and text-to-speech audio model APIs that significantly advance beyond last year's popular Whisper model. These developments are an opportunity to push my AI Voice Agent project in exciting new directions! I will be comparing how well OpenAI stacks up to ElevenLabs.
🛠️ My AI Toolkit Power Rankings:
📊 Performance Observations:
🔍 Key Pattern: Specialization Matters
The clear pattern emerging: success in the AI space isn't about being marginally better at everything, but significantly better at something specific. Each tool in my workflow serves a distinct purpose, creating a specialized ecosystem rather than a single solution. I see the same need arising for my AI Voice Agent, as so many of these tools are proliferating!
Dealing with a family emergency... will be back to posting soon...
🎮 AI Coding Showdown: Asteroids Game Challenge
🤖 AI Model Comparison: Decided to stress-test the latest LLMs (Grok 3, Gemini 2.0, Claude 3.7) by building an Asteroids game! The results were enlightening:
🔍 Key Learning Moments:
💡 Strategy Discovery: When stuck in troubleshooting loops with one AI, switching to another model often provided fresh perspective and unblocked progress.
The quest for the perfect AI-generated Asteroids game continues! This exercise revealed both the impressive capabilities and current limitations of even the most advanced coding assistants. 🚀
🔥 AI Model Updates & Full Stack Database Dive
🤖 LLM Landscape Developments:
💻 Full Stack Progress: Deep dive into MongoDB with Part 3 of University of Helsinki's course:
🔍 Key Insight:
Even as AI takes over more coding tasks, understanding database selection, schema design, and infrastructure considerations remains crucial. The technology choices we make early create the foundation for future scaling!
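To make the schema-design point concrete, here's a toy comparison of a MongoDB-style embedded document versus the normalized rows a relational database would use. The field names are illustrative, not from the course:

```javascript
// Document-model sketch: a MongoDB-style note embeds its comments, while a
// relational design normalizes them into a separate table with a foreign key.
// All field names here are illustrative.
const noteDocument = {
  _id: "65f0c0ffee",
  content: "HTML is easy",
  important: true,
  comments: [            // embedded, denormalized for one-query reads
    { author: "ada", text: "agreed" },
  ],
};

// The relational equivalent splits the same data across two tables.
const notesRow = { id: 1, content: "HTML is easy", important: true };
const commentsRow = { id: 1, note_id: 1, author: "ada", text: "agreed" };

// "Schemaless" doesn't mean validation-free: the app still enforces shape.
function isValidNote(doc) {
  return typeof doc.content === "string" && typeof doc.important === "boolean";
}

console.log(isValidNote(noteDocument)); // true
```

The trade-off in miniature: the document model reads everything in one query, while the relational split avoids duplicating comment data across notes.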
🎉 Major Milestone: Production-Ready AI Voice Agent!
🛠️ Feature Development: Call Transfer System
Successfully implemented warm transfer capability
Process flow:
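For readers unfamiliar with "warm transfer," the flow can be sketched as a tiny state machine. The state names and transitions below are my own shorthand, not the actual implementation:

```javascript
// Simplified warm-transfer state machine: the AI agent puts the caller on
// hold, briefs the human operator, then bridges the two calls.
// State names and transitions are illustrative only.
const TRANSITIONS = {
  in_call:        ["hold_caller"],                    // AI decides a human is needed
  hold_caller:    ["dial_operator"],                  // caller hears hold music
  dial_operator:  ["brief_operator", "hold_caller"],  // retry hold if no answer
  brief_operator: ["bridge_calls"],                   // AI summarizes context
  bridge_calls:   ["completed"],                      // caller + operator connected
  completed:      [],
};

function nextState(current, target) {
  const allowed = TRANSITIONS[current] || [];
  if (!allowed.includes(target)) {
    throw new Error(`Illegal transition: ${current} -> ${target}`);
  }
  return target;
}

// Walk the happy path from an active call to a completed transfer.
let state = "in_call";
for (const step of ["hold_caller", "dial_operator", "brief_operator",
                    "bridge_calls", "completed"]) {
  state = nextState(state, step);
}
console.log(state); // "completed"
```

Modeling the flow this way makes illegal jumps (say, bridging before the operator is briefed) fail loudly instead of producing a confusing call experience.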
🧠 Multi-LLM Collaborative Coding Approach:
Initial attempt with Cline AI to build Call Transfer System:
Problem-solving process:
☁️ Production Deployment:
Cloud provider selection: Render
Implementation steps:
Result: 24/7 production-grade AI Voice Agent running in the cloud!
🎯 Pattern Recognition:
Next up: Testing with real users and scaling the system based on feedback. From concept to production in record time! 🚀
🛠️ Deep Dive: AI Voice Agent Development Day
💻 Technical Progress:
🔍 Platform Deep Dive - Vapi.ai Exploration:
Pros:
Challenges:
ElevenLabs Implementation: Successfully built CallerId capture middleware. Next feature: call transfer capability
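A minimal sketch of what CallerId capture middleware can look like. The webhook field names (`From`, `caller_id`) are assumptions based on common telephony payloads, not necessarily what ElevenLabs sends:

```javascript
// Sketch of caller-ID capture from an inbound-call webhook payload.
// The `From` / `caller_id` field names are assumptions (Twilio-style);
// a real provider's payload may differ.
function extractCallerId(payload) {
  const raw = payload.From || payload.caller_id || null;
  if (!raw) return null;
  // Normalize toward E.164: strip everything except digits and a leading "+".
  const digits = raw.replace(/[^\d+]/g, "");
  return digits.startsWith("+") ? digits : `+${digits}`;
}

// Express-style middleware that attaches the caller ID to the request.
function callerIdMiddleware(req, res, next) {
  req.callerId = extractCallerId(req.body || {});
  next();
}

console.log(extractCallerId({ From: "(415) 555-0100" })); // "+4155550100"
```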
🤔 Technical Questions Emerging:
🎯 Pattern Recognition:
Next up: Building the call transfer feature - enabling AI to seamlessly hand off calls to human operators. The journey from code to conversation continues! 🚀
🎯 LLM Bias Observations:
📜 AI Voice Agent Regulations:
🛠️ Voice Agent Development Progress:
🔍 Pattern Recognition:
🎯 Next Steps:
Looking ahead: The intersection of ethics, regulation, and technical development is creating interesting challenges in the AI voice space. Time to find creative solutions! 🚀
🚀 AI Platform Evolution & Startup Progress
📊 OpenAI's Market Dominance:
🤖 My Seven AI Assistant Ecosystem:
Pattern: Each tool has carved out its own niche of strength, and I capitalize on that in how I use them. Multiple tools also let me work past daily usage limits.
💼 Corporate AI Adoption Trends:
🎯 AI Voice Agent Startup Progress:
Market Research:
Operational Development:
🔍 Pattern Recognition:
Next up: Finalizing the landing page and defining the unique market position in the AI Voice Agent space. Sometimes the best differentiation comes from understanding what everyone else is doing and finding my own unique angle! 🚀
🔬 AI Evolution: From Chat to Scientific Discovery
🤖 Major Platform Update: Google's AI Co-scientist Launch
📜 OpenAI's Policy Shift to "uncensor" ChatGPT outlined on TechCrunch
📚 AI Research Explosion:
🛠️ Lovable AI Coding Tool Review:
Key Issues:
Decision: Subscription canceled due to ROI concerns; I'll revisit in the future. For now, it's on to further testing and use of Cursor & Cline.
🎯 Pattern Recognition:
Next up: Exploring alternative AI coding tools with better economics and reliability. The rapid evolution in this space suggests better options are coming! 🚀
🚀 The AI Landscape: Rapid Evolution & Market Shifts
📊 LLM Competition Heats Up:
💻 The Future of Freelance Development:
📚 Academic Deep Dive Necessity:
Three distinct sources strongly recommended engaging with scholarly AI research in order to be an effective product leader:
🔍 Must-Read Papers:
Pro tip: Leverage LLMs to decode dense academic concepts and the latest innovations!
🎯 Pattern Recognition:
📊 Tax Prep Meets AI: Insights from Personal Finance Day
🔍 Deep Dive into Tax Preparation:
Today was all about diving into personal tax preparation - a perfect real-world case study for AI disruption! The experience highlighted a fascinating divide: while data entry is ripe for automation, the strategic preparation process, with all the paperwork it requires, still demands careful human oversight.
💡 Key Observations:
🤖 AI Development Updates:
🎯 Pattern Recognition:
The tax preparation experience perfectly illustrates how AI is transforming professional services:
📊 Deep Work Day: From Tax Filing to AI Policy Insights
💼 E-commerce Business Operations - some tasks, like tax filings, still need to be tackled with traditional software, but LLMs are great advisors for speeding up the process (and saving thousands of dollars over hiring professionals):
🌍 AI Policy Developments from Paris:
🔍 Pattern Recognition: Finding balance in AI governance
🤖 LLM Evolution & Full-Stack Adventures
🔄 ChatGPT 4o vs Claude: The AI Assistant Race Heats Up
💻 Cloud Deployment Deep Dive in University of Helsinki's Full Stack course part 3
Successfully deployed full-stack apps on two platforms:
Fascinating discovery: Production React apps undergo significant transformation
🛠️ Technical Revelations:
🚀 AI Startup Insights & Voice Agent Breakthrough
🎯 Sparklabs & Nex AI Startup Forum Highlights:
🎤 Voice Agent Prototype Success:
🔍 Pattern Recognition: Two powerful trends converging:
Next up: Diving into the verbosity issue while preparing for production deployment. The real learning begins when users start interacting with the system! 🚀
🧠 Deep Diving into LLMs: From Theory to Practice
📚 LLM Fundamentals Deep Dive:
🔄 AI Industry Dynamics:
🛠️ Hands-on Agent Building Progress:
🌉 SF Tech Scene Discovery:
🔍 Pattern Recognition: A clear evolution in the AI landscape:
🤖 Low-Code AI & Full-Stack Journey: Bridging Theory and Practice
🔧 AI Agent Building Adventures:
💻 Full Stack Development Progress:
🎯 Product Management Career Insights (ProductTank @ GitHub) with Vidur Dewan and Yasi Baiani executive recruiters as panelists:
🔍 Pattern Recognition: Two critical trends are emerging in the AI-powered product management landscape:
🚀 Backend Evolution & Voice Agent Insights
💻 Full Stack Progress: Making strides in Part 3 of University of Helsinki's Full Stack course:
🎙️ Voice Agent Deep Dive: The voice agent landscape is fascinating and complex:
🔍 Key Insight: While latency optimization is crucial, the immediate focus remains clear: validate product-market fit with low-code solutions first, then tackle scalability challenges. As they say, better to have a slow product that people want than a fast one they don't!
Next up: More backend development mastery and low-code agent prototyping! 🛠️
🔄 Backend Journey & Voice Agent Deep Dive
💻 Full Stack Progress: Diving into Part 3 of University of Helsinki's Full Stack course - Node.js territory! Each step brings me closer to understanding and customizing AI-generated code with confidence.
🎙️ Voice Agent Architecture Exploration: After extensive research into the voice agent landscape, a clear strategy emerged:
MVP Path:
Production Architecture:
🔍 Key Insight: Start simple, validate fast! While the full tech stack offers robustness and scale, proving market fit with low-code tools first is the smarter path forward.
Time to build that voice agent prototype! 🚀
🎯 Full Stack Milestone: Part 2 Complete!
💻 Technical Achievements: Conquered Part 2 of the Helsinki Full Stack course with a challenging final project:
🔍 Key Learning: The real magic happens client-side - keeping the UI responsive while managing asynchronous data flows is an art, especially for interactive AI based use cases like chat & agents! These patterns will be crucial for building AI-powered applications where user experience is king.
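One concrete pattern behind that responsiveness is the optimistic update: apply the change locally first, then reconcile once the server responds. A simplified sketch (the `saveNote` API is hypothetical, standing in for an axios/fetch request):

```javascript
// Optimistic update sketch: show the new note immediately, then swap the
// temporary entry for the server-confirmed one, or roll back on failure.
// `saveNote` is a hypothetical API call, not a real library function.
function applyOptimistic(notes, draft) {
  return [...notes, { ...draft, id: `temp-${Date.now()}`, pending: true }];
}

function reconcile(notes, tempId, saved) {
  return notes.map((n) => (n.id === tempId ? { ...saved, pending: false } : n));
}

function rollback(notes, tempId) {
  return notes.filter((n) => n.id !== tempId);
}

async function addNote(notes, draft, saveNote) {
  const optimistic = applyOptimistic(notes, draft);
  const tempId = optimistic[optimistic.length - 1].id;
  try {
    const saved = await saveNote(draft); // server assigns the real id
    return reconcile(optimistic, tempId, saved);
  } catch {
    return rollback(optimistic, tempId); // drop the temp entry on failure
  }
}
```

The UI stays snappy because the pending entry renders instantly; the network round-trip only decides whether it gets confirmed or removed.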
Next up: Part 3 beckons with server-side development! 🚀
🔄 Full Stack Journey & Mental Wellness
💻 Tech Progress: Diving deeper into University of Helsinki's Full Stack course Part 2! Today's wins:
🧘‍♂️ Mental Wellness Discovery:
Found Michael Singer's work through an intriguing talk, LET IT GO! Surrender to Happiness. His book "The Untethered Soul" (41.8k Amazon reviews!) offers fresh perspectives on mental freedom. As a logic-driven technologist, I'm finding value in exploring different approaches to mental wellness - after all, isn't our mind's interpretation of circumstances what shapes our reality?
The path to becoming an AI-powered PM isn't just about technical skills - it's about growing holistically! 🚀
🎓 Deep Diving into Computer Use & Voice Agents
🤖 Computer Use Reality Check (DeepLearning.AI x Anthropic):
Today was eye-opening! Completed the Building Toward Computer Use with Anthropic course, and wow - we're definitely in the early days. The current state is both fascinating and humbling:
🎯 Enterprise Prompting Insights:
The gap between consumer and enterprise prompting is wider than I imagined! My key realizations:
🗣️ Voice Agent Architecture Deep Dive:
Spent hours mapping out voice agent architecture - it's a fascinating puzzle of moving parts:
🔍 Pattern Recognition: There's a clear divide between proof-of-concept tools and production-ready systems. Whether it's computer use or voice agents, the path from demo to scalable solution is where the real challenges emerge.
🎓 Deep Diving: From API Integration to Co-Founder Hunt!
Today was packed with learning and networking - exactly the kind of day that shows how theory and practice come together in the AI product space!
🔧 Technical Growth on Two Fronts:
🤝 Building the Foundation for an AI Startup:
🔍 Pattern Recognition: The more I learn, the clearer it becomes - successful AI product development needs both deep technical understanding and strong product intuition. Today reinforced that my alternating learning strategy (technical skills ↔️ product/business knowledge) is paying off!
Next up: Diving deeper into API integration patterns and continuing the co-founder search. The journey to building AI-powered products is getting more exciting each day! 🚀
🚀 AI Models, APIs, and Real-World Challenges
🤖 Big Tech's AI Race - Google's Gemini 2.0 Launch:
The AI landscape keeps evolving at breakneck speed! Google just dropped Gemini 2.0 with its Flash and Pro variants. As someone deep in the AI coding journey, I'm particularly excited about Gemini 2.0 Pro's enhanced coding capabilities. Time for some hands-on comparison with Claude to see which assistant better understands my coding style and needs. The real power might lie in knowing when to use which tool!
🔧 API Deep Dives & Cost Optimization - Making progress on my AI integration journey:
The parallel with cloud computing's evolution is fascinating - from basic hourly billing to spot pricing. Are we seeing the same pattern with AI pricing models? This batch processing approach feels like the beginning of more sophisticated pricing strategies.
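To make the batch-pricing point concrete, here's a back-of-the-envelope cost calculator. The 50% batch discount matches what OpenAI advertises for its Batch API; the per-token prices below are placeholders, not current rates:

```javascript
// Back-of-the-envelope LLM cost calculator. Prices are placeholder USD
// figures per million tokens; the 50% batch discount mirrors OpenAI's
// advertised Batch API pricing.
function costUSD({ inputTokens, outputTokens, inputPrice, outputPrice, batch = false }) {
  const base =
    (inputTokens / 1e6) * inputPrice + (outputTokens / 1e6) * outputPrice;
  return batch ? base * 0.5 : base;
}

const job = { inputTokens: 10e6, outputTokens: 2e6, inputPrice: 2.5, outputPrice: 10 };
console.log(costUSD(job));                     // 45
console.log(costUSD({ ...job, batch: true })); // 22.5
```

For any workload that doesn't need a real-time answer (classification backfills, report generation), halving the bill for the same output is hard to ignore.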
📚 Engineering Excellence & Best Practices:
Diving into "The Pragmatic Programmer" while getting coding style guidance from AI assistants. Grok's introduction to PEP 8 style guide was particularly enlightening - there's something powerful about writing code that not only works but is also maintainable and readable. These fundamentals seem even more crucial when building AI-powered solutions.
🤝 Real-World Reality Check:
Had an eye-opening conversation with another founder building in the AI space for SMB customers. Key revelation: the technology piece might be the easier part! The real challenges lie in:
This validates my approach of building strong technical foundations while keeping the end user's perspective front and center. The best AI solution is worthless if users don't trust or understand it!
🎯 Next Steps: Balancing technical development with market research - need to find creative ways to reach and educate potential SMB users while continuing to refine my AI integration skills. Maybe it's time to explore some traditional marketing channels alongside the tech stack?
The journey of building AI-powered products is teaching me that success requires more than just great technology - it's about building bridges between cutting-edge capabilities and real-world user needs! 🚀
🔄 Full Stack Journey & AI Product Management Insights
🎓 React Forms Mastery: Finally conquered Forms in University of Helsinki's React course Part 2. Next up, backend coding! As someone whose comfort zone has been backend languages (Python and Perl, plus Java from college days), I'm fascinated by the upcoming frontend-backend interaction in the course, including JSON data manipulation, and I'm curious how JavaScript's approach will compare to my familiar Python territory. Given how heavily AI coding tools are JavaScript-focused, mastering this ecosystem isn't just nice-to-have anymore - it's becoming essential for troubleshooting and extending AI-generated code.
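As a first taste of that comparison: where Python reaches for json.loads and json.dumps, JavaScript's JSON.parse and JSON.stringify are built into the language:

```javascript
// JSON round-trip in JavaScript: JSON.parse / JSON.stringify are the
// counterparts of Python's json.loads / json.dumps.
const payload = { name: "Road Trip", stops: ["SF", "LA"], miles: 382 };

const wire = JSON.stringify(payload); // serialize for the network
const parsed = JSON.parse(wire);      // deserialize on the other side

console.log(typeof wire);         // "string"
console.log(parsed.stops.length); // 2
```

The pleasant surprise coming from Python: because JSON is essentially a subset of JavaScript object literal syntax, the parsed result is immediately a plain object with no extra ceremony.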
🎯 AI Sales Revolution: Caught a mind-bending A16Z podcast today - "Death of a Salesforce" - and wow! As PMs, we often need to be Swiss Army knives, sometimes knowing even more than domain experts to effectively champion our products. The podcast revealed how AI is revolutionizing what seemed untouchable: the art of sales itself. From pinpoint prospect targeting to AI-powered cold calling, the transformation is going to be radical. It's not just about automation - it's about augmentation and precision that human-only approaches can't match.
🤖 Responsible AI, The PM's Ethical Compass: Here's a wake-up call: UC Berkeley's latest survey shows 77% of organizations struggling with responsible AI implementation. The responsibility diffusion is real, but as PMs, we're uniquely positioned to bridge this gap. Why does this matter? Because responsible AI isn't just about checking boxes - it's about building trust, ensuring compliance, and creating sustainable product value. The Berkeley playbook is clear: responsible practices = stronger brand + customer loyalty + risk management.
✨ Design-First AI Development: Here's a pro tip for leveraging AI coding tools: feed them design principles! As PMs obsessed with user experience, we can't let AI generate code in a design vacuum. I've been experimenting with using Dieter Rams' 10 principles as AI coding guardrails - the results are fascinating. Try this: identify your design hero and use their principles to guide your AI tools. It's like having a world-class designer reviewing every line of generated code!
🔍 Deep Research Tools & Developer Mindset Evolution
🤖 AI Research Tools Landscape: Gemini Deep Research has been my secret weapon for startup research, delivering comprehensive 10+ page reports that compress days of work into minutes. Now OpenAI is entering the arena with their own deep research tool named... you guessed it, OpenAI Deep Research (though it's a ChatGPT Pro exclusive for now). While I'm loyal to Gemini's impressive capabilities, competition in this space could push innovation even further. Watching this space closely!
👨‍💻 The Developer's Mind: Diving into "The Pragmatic Programmer - 20th Anniversary Edition" by David Thomas and Andrew Hunt has been eye-opening! Just 30 pages in, and I'm discovering a surprising parallel: developers and product managers share more DNA than I thought. The emphasis on:
These principles resonate deeply with my PM background, making the transition feel more natural than expected.
🚀 Full Stack Progress Report: Completed all the assignments in University of Helsinki's Full Stack course Part 2! Finally cracking the code on:
The learning curve has been manageable, but those sneaky syntax errors... 😅 Thank goodness for AI pair programming catching my missing parentheses when I'm lost in hundreds of lines of code! It's becoming clear that AI isn't just a coding assistant - it's more like a patient mentor pointing out the obvious things we sometimes miss in the complexity.

🎯 Key Insight: Whether you're wearing a PM or developer hat, success comes down to understanding your tools, your users, and knowing when to ship versus when to refine. The worlds of product management and development aren't just overlapping - they're two sides of the same coin!
Next up: Diving deeper into React components and seeing how far I can push these newfound JavaScript skills! 🚀
🌊 The LLM Landscape: Shifting Tides & New Horizons
Today's deep dive into the evolving LLM ecosystem revealed some fascinating insights about where we're headed. The pace of innovation is becoming breathtaking!
🚀 Market Dynamics Shakeup: The DeepSeek launch is forcing us to recalibrate our assumptions about the AI race. With Chinese companies now potentially just 3-6 months behind their American counterparts (down from 9-12 months), the competitive landscape is intensifying. But here's the real kicker from the All-In Podcast this weekend: the future isn't about who owns the best LLM – it's about who builds the most compelling applications and communities around them.
💡 Key Market Insights:
🎓 Deep Learning Adventures: Completed the "Reasoning with o1" course by DeepLearning.AI, and wow – it's clear we need to rethink our approach to these new reasoning models. The traditional prompting playbook needs a serious update!

🛠️ New Prompting Paradigms:
🔍 Critical Realization: The chat interface is just scratching the surface. To truly harness o1's potential, coding proficiency isn't optional – it's essential. The API opens up possibilities that the chat interface simply can't match.
Next Steps: Time to deep dive into API implementation and start building some proof-of-concept applications. The future of AI product management clearly lies at the intersection of technical capability and strategic vision! 🚀
🚀 The AI-powered PM Revolution Is Here!
Today brought major validation and exciting developments in the AI-PM landscape. Let's break down the key developments:
💼 LinkedIn's PM Evolution Insights: The writing is on the wall...
Product Management is at the cusp of an AI revolution, with 83% of PMs agreeing that AI will help progress their careers. LinkedIn's latest analysis confirms what many of us have sensed - PM roles are primed for AI disruption. But here's the interesting part: it's not about replacement, it's about evolution. As the lynchpin between customers and products, PMs who master AI tools will become exponentially more valuable. The message is clear: adapt and thrive, or risk falling behind.
🎯 Key Insight: The future belongs to PMs who can leverage AI to:
🔥 OpenAI's o3 Launch: Faster and better reasoning with new developer features.
After December's preview, o3 is finally here! As someone diving deep into the technical side of product management, I'm particularly excited about:
💻 Full Stack Journey Update: Continuing my mission to bridge the PM-Developer gap.
🔮 Looking Ahead: The convergence of AI capabilities and PM responsibilities is creating a new breed of product leader - one who can seamlessly blend strategic thinking with technical execution. As we navigate this transformation, the ability to understand both business needs and technical implementation becomes increasingly valuable.
🔍 AI Business Models & Market Dynamics: From Features to Bubbles
💡 AI Go-to-Market Deep Dive: Kate Syuma's session on AI feature adoption was eye-opening! Key patterns emerging in how successful companies monetize AI capabilities:
🤖 Custom Agents Revolution: Fascinating demo by Amit Rawal and Thiago Oliveira showcasing personalized ChatGPT agents! Their work points to a future where AI becomes your strategic thinking partner:
💭 Market Reality Check: Sequoia's analysis of the AI bubble raises some sobering questions. The numbers are staggering:
The DeepSeek LLM's efficiency gains hint at an interesting possibility: Are we overbuilding infrastructure again, or is this time truly different?
🎯 Key Takeaway: While we're clearly in a period of massive infrastructure investment, the path to monetization needs careful navigation. Success will likely come from thoughtful AI integration and clear value proposition, not just raw compute power.
What are you planning to build with AI?
🤖 AI-powered PM Adventures: From ML Debugging to Startup Horizons
🧠 Deep Learning Reality Check:
💭 Product Leadership in the AI Era:
🚀 Startup Journey Updates:
🔍 AI Development Tools Deep Dive:
Next steps: Diving into founder meetings while continuing to bridge the gap between theoretical ML knowledge and practical implementation. The journey of becoming an AI-powered PM is revealing new dimensions every day! 🌟
🤖 The Great LLM Race Heats Up:
💡 Industry Insight: The US-China AI race is intensifying, but here's the real winner - us! Open source models are also democratizing access to cutting-edge AI, driving down costs and boosting market optimism. Tech stocks are reflecting this reality, climbing as investors recognize the long-term profitability impact of cheaper AI infrastructure.
🎓 Personal Milestone: Completed University of Helsinki Full Stack Course Part 1! The pieces are finally clicking into place. Now I can approach tools like Lovable, Bolt, and V0 with a deeper understanding of React architecture, ready to level up my stock trading app project.
🔍 Key Learning: Understanding fundamentals (like React) transforms how we use AI tools - from blind reliance to strategic collaboration. The future belongs to those who can bridge both worlds!
Next up: Diving back into AI coding assistants with fresh eyes and stronger foundations. Let's see how much faster we can build with this new knowledge! 🚀
🚀 Full Stack Journey: Where React Meets AI
💻 React Deep Dive Progress:
🤖 AI Automation Insights (via a16z podcast):
🔍 DeepSeek R1 Experience (and the crazy $600B market-cap drop in Nvidia stock):
🚀 Parallel Paths: Startup Validation & AI Technical Deep-Dives
💡 Startup Journey Acceleration:
🔍 Technical Foundation Building:
🎯 Pattern Recognition: The intersection of PM skills and startup validation is creating a unique advantage - using AI tools to rapidly test hypotheses across multiple ventures simultaneously.
Next challenge: Applying AI-powered velocity to determine which startup deserves full focus. Time to put those PM prioritization frameworks to the test!
🧠 Peak Performance: The Hidden Engine of AI Product Development
Today's deep dive into peak performance psychology offered crucial insights for sustaining the intense learning journey to become an AI-powered PM. Fascinating conversation between Jordan B. Peterson and Tony Robbins unveiled key principles that directly apply to our field:
💪 Performance Psychology Insights:
🔑 Key Applications for AI Product Managers:
The path forward is clear: sustainable high performance isn't just about motivation - it's about systematic energy management and crystal-clear purpose alignment. Time to apply these principles to my AI-powered PM development journey! 🚀
Diving deep into effective LLM prompting - the fastest path to AI-enhanced product management. Two standout learning experiences:
Patrick Neeman's UX/PM prompting masterclass showed impressive practical techniques. His new book, uxGPT, is already proving valuable in hands-on practice.
Mustafa Kapadia demonstrated how to personalize LLM responses by training them with company content and organizational context - brilliant for aligning AI outputs with business goals.
Both leaders are sharing cutting-edge prompting techniques - worth following! 🚀
🎯 AI Product Strategy & Engineering Deep Dives
Fascinating insights from today's webinars and learning material! Let's unpack:
💰 AI Pricing Evolution (hosted by ibbaka): The current landscape is stuck in cost-plus pricing for gen-AI tools, thanks to API costs and fierce competition. But here's where it gets interesting: AI agents are pushing us to rethink everything. If we're replacing human labor, why stick to cost-plus, or even the current per-user pricing? The future might be all about outcomes, and therefore a more results-oriented pricing model...
🛠️ ML Engineering Reality Check (by Manisha Arora, a Google ML engineer): ML development isn't some exotic creature - it needs the same disciplined approach as traditional software. Version control, modular code, rigorous testing - these fundamentals become even more critical when multiple engineers are tinkering with the models. Key takeaway: learn how to use Git, which you also need for your coding projects.
📚 Personal Growth: Taking the plunge into full-stack React and Node.js development so that I understand what the AI coding assistants are creating. I started the University of Helsinki full-stack development course, and I'm building a single-page application - the modern approach! While AI coding assistants are powerful allies, it's becoming clear: to build sophisticated, production-ready MVPs, I need to speak their language. React keeps popping up as the common denominator in AI-assisted development. Let's see how far I have to go in this course until "it clicks". The alternative full-stack course I'm considering is The Odin Project - also very cool!
The path to AI-powered products requires both strategic thinking and solid technical foundations. Each day brings new clarity to this journey!
🤗 Diving Into Hugging Face: Where Theory Meets Practice
Deep dive into the Transformers chapter in the NLP course! Finally seeing how those abstract ML concepts come to life – watching sentences transform into tokens, then into numerical IDs that models can actually crunch. Those neural network fundamentals from Stanford are clicking into place: the layered architecture, training patterns, and vector transformations all make so much more sense in practice.
The real excitement? Understanding Hugging Face's pipeline is the gateway to customization. Can't wait to start fine-tuning models with specialized content to boost their accuracy. Theory is transforming into practical tools! 🚀
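As a toy illustration of that text → tokens → IDs flow (a tiny hand-built vocabulary standing in for a real learned subword tokenizer like Hugging Face's AutoTokenizer – the names and vocab here are my own, for illustration only):

```python
# Toy illustration of the tokenize -> convert-to-IDs flow described above.
# A real Hugging Face tokenizer does this with a learned subword vocabulary;
# here a tiny hand-built vocab stands in for it.

VOCAB = {"[UNK]": 0, "i": 1, "love": 2, "machine": 3, "learning": 4, "!": 5}

def tokenize(text: str) -> list[str]:
    """Split text into lowercase word tokens (a stand-in for subword tokenization)."""
    return text.lower().replace("!", " !").split()

def convert_tokens_to_ids(tokens: list[str]) -> list[int]:
    """Map each token to its vocabulary ID, falling back to [UNK]."""
    return [VOCAB.get(tok, VOCAB["[UNK]"]) for tok in tokens]

tokens = tokenize("I love machine learning!")
ids = convert_tokens_to_ids(tokens)
print(tokens)  # ['i', 'love', 'machine', 'learning', '!']
print(ids)     # [1, 2, 3, 4, 5]
```

A real model then feeds those IDs into embedding layers – which is exactly where the Stanford neural network material connects.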
🎯 New Learning Strategy: Alternating Theory & Practice
I'm implementing a new rhythm to maximize learning: alternating between theoretical deep-dives and hands-on tooling/coding days. Today was all about exploring coding tools and pushing boundaries!
🛠️ Tool Exploration Adventures:
🔍 Pattern Recognition: A clear tech stack pattern is emerging in the AI coding tool landscape (Bolt, Lovable, V0):
Time to level up my React game and dive deeper into these backend technologies!
Next up: Exploring the sweet spot between AI-assisted development and maintaining granular control over the codebase. 🚀
🎓 Leveled Up: Stanford's Advanced Learning Algorithms Course is Complete!
Wrapped up my AI foundations journey with Decision Trees – fascinating how they shine with structured data while Neural Networks dominate the unstructured realm of images and audio. The course has equipped me with a solid grasp of supervised learning models, opening doors to hands-on experimentation with TensorFlow and PyTorch.
Next frontier? Diving into Large Language Models and exploring fine-tuning possibilities for custom applications. The theoretical foundation is laid – time to build! 🚀
🧠 Machine Learning: It's All in the Fine-Tuning!
Wrapped up lessons from weeks two and three of Stanford's Advanced Learning Algorithms course, diving into the art and science of model optimization. Who knew machine learning had so many levers to pull? Learned the delicate dance of managing bias and variance:
High Bias? Try: a larger network, additional features, or less regularization.
High Variance? Consider: more training data, more regularization, or a smaller feature set.
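To see the diagnosis in action, here's a small synthetic experiment of my own (not from the course): comparing training and validation error for polynomial fits of increasing degree. High bias shows up as both errors being high; high variance as low training error with a gap to validation error.

```python
import numpy as np

# Diagnosing bias vs. variance by comparing training and validation error.
# Synthetic data: a sine curve plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Split into train/validation halves
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def mse(degree: int) -> tuple[float, float]:
    """Fit a polynomial of the given degree; return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    pred = lambda xs: np.polyval(coeffs, xs)
    return (np.mean((pred(x_tr) - y_tr) ** 2),
            np.mean((pred(x_va) - y_va) ** 2))

for d in (1, 3, 9):
    tr, va = mse(d)
    print(f"degree {d}: train={tr:.3f}  validation={va:.3f}")
```

Degree 1 underfits (high bias: both errors high), degree 3 tracks the sine well, and higher degrees start chasing the noise (variance creeping in).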
🚀 Caught Sam Altman's fascinating talk on Y Combinator's "How To Build The Future." His take? We're in a golden age for startups, with AI as both catalyst and accelerant. The tech can help companies scale faster and unlock new possibilities – but there's a catch: solid business fundamentals still make or break success. AI is a powerful tool, not a silver bullet.
Every day brings new insights into both the technical depth and practical applications of AI. The learning never stops!
🧠 Diving Deeper into Neural Networks: From Binary to Multiclass Classification
Made significant strides in Stanford's Advanced Learning Algorithms course today! Discovered how ReLU (Rectified Linear Unit) powers the hidden layers of modern neural networks – a game-changer compared to traditional activation functions. The progression from binary classification (distinguishing 0s from 1s) to multiclass recognition (identifying multiple outputs like digits 0-9) using Softmax really illuminated how neural networks scale to handle complex real-world problems.
⚡ Speed Optimization Revelations: learned how the "Adam" optimizer in TensorFlow turbocharges gradient descent, dynamically adjusting step sizes for optimal convergence. Add Convolution Layers to the mix, with their clever partial layer processing, and suddenly machine learning models can be trained in a fraction of the time!
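To make these ideas concrete, here's a toy numpy sketch of softmax and a single Adam update – my own illustration of the underlying math, not TensorFlow's actual implementation:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Turn raw logits into a probability distribution over classes.
    Subtracting the max is the standard numerical-stability trick."""
    exp = np.exp(z - z.max())
    return exp / exp.sum()

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: running averages of the gradient (m) and its square (v)
    give each parameter its own adaptively sized step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Logits for a 10-class classifier (e.g. digits 0-9)
logits = np.array([1.0, 2.0, 0.5, 0.1, 0.0, -1.0, 0.3, 0.2, 1.5, 0.7])
probs = softmax(logits)
print(probs.argmax())  # predicted class is the highest-probability one
```

The per-parameter step sizing in `adam_step` is what "dynamically adjusting step sizes" means in practice.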
Each piece of the neural network puzzle is falling into place, transforming these theoretical concepts into practical tools. Can't wait to apply these optimizations to real projects!
🧠 Deep Learning Deep Dive
The theory-practice pendulum swung toward theory today as I immersed myself in machine learning fundamentals. Wrapped up Week 1 of Stanford's Advanced Learning Algorithms course, unlocking a deeper understanding of neural networks. Fun coincidence: revisited matrix multiplication – a concept I first encountered in a dusty '90s textbook when I was tinkering with 3D video games. Back then, I couldn't grasp its importance; now it's fascinating to see how this mathematical foundation powers both ML models and gaming graphics!
📚 Learning Evolution: While advancing through Hugging Face's NLP Course Chapter 1, I'm finding myself gravitating toward their hands-on approach. Though the academic foundations are valuable, the real excitement lies in practical implementation. TensorFlow and PyTorch have abstracted away much of the complexity, letting me focus on building rather than reinventing the wheel. My strategy: code first, dive deeper into theory when needed.
💻 Hardware Revolution: NVIDIA just dropped a bombshell with Project DIGITS – a $3,000 AI supercomputer that can handle 200B-parameter model inference! For context, this beast packs 128GB unified memory, dwarfing the new RTX 5090's 32GB. Even more mind-bending: link two together and you're running 400B+ parameter models. The democratization of AI computing is happening faster than anyone expected.
🛠️ AI Development Tools Face-Off & Future Insights
Explored lovable.dev alongside bolt.new today, comparing their approaches to app creation. For my stock trading app, Lovable's AI surprised me by suggesting a modern take on the Bloomberg Terminal layout – sleek and data-rich. While its Tailwind CSS creation looked stunning, I had to compromise for Bootstrap compatibility. Thanks to Cursor's seamless integration with Django, the third iteration of my stock trading app's UX is looking sharp!
🔍 Backend Discoveries: Both lovable.dev and bolt.new use Supabase – an open-source Firebase alternative. The real-time update capability of Supabase caught my attention, as my Django app needs live trade updates. And it has a vector store as well! Now I'm weighing the trade-offs: enhance Django with JavaScript or pivot to Supabase? Supabase also uses PostgreSQL, which would replace my $5/mo Heroku DB instance with a free one - a good deal! I also found some promising .cursorrules samples that might boost AI accuracy in the meantime.
🎯 Future of Marketing: Today's Webflow webinar on 2025 marketing strategies raised fascinating questions about AI's impact on SEO and search. The key takeaway? With AI potentially bypassing traditional website browsing, success will hinge on offering unique, timely perspectives that AI can't replicate. (Fun fact, productpath.ai runs on Webflow.)
🌟 Personal Reflection: Ended the day with a powerful reminder from a wellness podcast with Graham Weaver, Stanford GSB Professor: life's too precious for autopilot mode. As I navigate this AI-powered journey, I'm grateful to be pursuing my passion. It's not just about building apps – it's about creating a story worth telling when we look back.
Next step: Diving deeper into real-time data solutions. The quest for the perfect tech stack continues!
🧠 Deep Diving into AI Fundamentals & Tools!
Made solid progress through Stanford's Advanced Learning Algorithms course today, exploring neural networks from theory to practical TensorFlow implementation. This sparked my curiosity about real-world applications, leading me to read about Hugging Face's pre-trained models.
The Hugging Face ecosystem is fascinating! After watching a Hugging Face getting started guide and then diving into the Hugging Face NLP Course, I'm seeing exciting possibilities for integrating open-source models into my stock trading app.
Speaking of AI tools, Microsoft launched their "new" 365 Copilot Chat today. Strip away the marketing buzz, and it's essentially a fusion of their existing Chat, Agents, and IT Controls. While the repackaging feels a bit overdone, the Agents functionality could be worth watching.
I also continued reading Fundamentals of Data Engineering and got to page 147.
Next up: Exploring which Hugging Face model might give my trading app that extra edge. Stay tuned! 📈
Maven's AI Prototyping session with Colin Matthews validated that I'm on the right path to rapidly building a UX with AI by utilizing screen-capture examples! The post-class discussions also revealed I'm not alone – there's a whole community of builders exploring AI coding, each bringing different technical backgrounds to the table.
After class I took Bolt – which combines StackBlitz's in-browser development capabilities with AI assistance – for a spin and managed to level up my stock trading project's UX. The key? Setting clear HTML and Bootstrap CSS constraints, while showing Bolt my efforts so far (with a screen capture), made the Cursor integration seamless.
Next challenge on the horizon: implementing testing. As the complexity grows, I need to protect against potential breaks.
Each day brings new tools and insights in this AI-powered PM journey. If you're on a similar path, I'd love to hear your experiences!
The AI landscape continues to evolve rapidly. Today's headlines feature the Altman-Musk debate about OpenAI becoming a for-profit enterprise, which I find important to understand. The Free Press podcast, Sam Altman on His Feud with Elon Musk - and the Battle for AI's Future, was informative; Sam Altman's measured responses about AI progress and regulation particularly stood out. His advocacy for transparency in AI tuning also resonates strongly – users deserve to understand why AI systems make the decisions they do.
My methodical approach with Cursor – tackling one major feature at a time – continues to pay off. The website development is progressing smoothly, and Heroku deployments remain stable. Django's elegant handling of database schema changes has been a particular bright spot in the process.
This journey is teaching me valuable lessons about the current state of AI coding tools: while they're incredibly powerful for specific use cases, understanding their limitations is crucial for effective implementation.
I used Cursor, the AI code editor, for the first time and experimented by adding features to the Heroku sample app with Python Django. For example, I used the "composer" feature to instruct Cursor to create a login. I was impressed that it got most of the changes right including (a) edits to the views.py file (relevant package imports and a new route for a login page) (b) a new html file for the login page (extending properly the base.html file) and (c) updates to urls.py file.
Cursor did make a recommendation to change my Django version in the requirements.txt file, which was not required, so I ignored that suggestion. I even got instructions to rebuild my database schema, which made sense.
Where the changes fell short was the settings.py file, for which Cursor had no suggestions; I needed to make a few alterations myself, editing the apps, middleware and templates sections to support authentication. I didn't realize the errors were related to this until I did some log reviews and got help from Claude, which figured out the problem right away.
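For reference, here's a minimal sketch of the settings.py sections involved, based on standard Django defaults – "myapp" and the login URL are hypothetical placeholders, and my actual project's names differ:

```python
# Django settings.py fragments relevant to authentication (sketch only).

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",            # the authentication framework itself
    "django.contrib.contenttypes",
    "django.contrib.sessions",        # login requires session support
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "myapp",                          # hypothetical app name
]

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",  # attaches request.user
    "django.contrib.messages.middleware.MessageMiddleware",
]

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.contrib.auth.context_processors.auth",  # exposes `user` in templates
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

LOGIN_URL = "/login/"  # where @login_required redirects unauthenticated users
```

If any of these pieces are missing, login views generated elsewhere will fail in ways that only surface in the logs – exactly what bit me here.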
I further experimented by editing the nav bar with login/logout, and then building a simple app with form entry. Surprisingly, only a few issues crept up (though at one point Cursor offered to delete one of my database models :). So you can't just click "ok" ten times and expect everything to be right – double-checking is required, and my coding lessons are coming in handy.
I also did some digging into which CSS framework to adopt for easier app styling, debating between Bootstrap and Tailwind. I ultimately settled on Bootstrap, as it's much easier to deploy with Heroku using the CDN option, and right now I'm prioritizing speed. I can migrate to another CSS framework in the future if it makes sense.
I finished the Harvard CS50W lesson on React to get me up to speed on the React framework. One of the differences in the Harvard CS50W Web Programming with Python and JavaScript class from 2018 to 2022 is the introduction of the React lesson. As I'm interested in programming in React, I decided to watch this section (starting at 52min in Lecture 6 of the newer course).
I also launched today v1 of productpath.ai! 🚀 It's my digital hub for documenting my transformation into an AI-powered product manager. While this version runs on Webflow, the real experiment is already in motion – I'm building my next site entirely with AI as my development partner.
Coming soon: Watch me navigate product management, design, and coding alongside AI to launch a full-stack web application on Heroku. Every success, challenge, and lesson learned will be shared here. The journey from PM to AI-empowered builder is just beginning...
After extensive research and comparisons, I narrowed down my first hosting provider to be Heroku or Digital Ocean. As I'm going for speed and simplicity vs low cost on my first attempt, I decided on Heroku. I considered AWS, GCP, and Azure as well, but from what AI advised me, those will require more expertise (working on that, not a p0 right now!)
I worked through the Getting Started on Heroku with Python tutorial and got the idea of how Heroku works. It's even simpler than I thought! The approach of deploying with Git and a simple Procfile is awesome. Makes it so easy! Definitely a confidence booster that operations will be easy for the first project.
I also watched the video session: How Domain-Specific AI Agents (DXA) Will Shape the Industrial World in the Next 10 Years. Even though the talk was super high level, it did make me think about how manufacturing could take advantage of GenAI and potentially how the USA could get some of the manufacturing back on shore… thought provoking…
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 9 (guest presenters from GitHub, Travis CI). This session didn't have hands-on practice and the overview is now quite dated – a lesson you can skip.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 10 (Scalability). This was a listen only session, and I watched it at 2x. Most of the content I was familiar with, such as application and database scaling to handle more user traffic in your application. There was also a discussion on using caches to speed up reads, client and server. I would say this lesson is very much for the beginner, but if you are not familiar with scalability concepts, might be worth the overview.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 11 (Security), which was also the last lesson. The concepts on security are very relevant given the empowerment of hackers with AI tools. Even though the lesson covers basic security risks (my favorite: the JavaScript Cross-Site Scripting vulnerabilities), these are must-have concepts for everyone to understand when generating their own web pages with AI, to prevent obvious issues that AI might not consider.
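As a minimal illustration of the XSS idea (my own Python stdlib example, not from the lecture): user input rendered into a page unescaped becomes executable script, while escaping turns it into harmless text.

```python
import html

# Cross-Site Scripting (XSS) in miniature: the same user input, rendered
# with and without escaping.
user_input = '<script>alert("stolen cookies")</script>'

unsafe_page = f"<p>Hello, {user_input}</p>"             # vulnerable: raw interpolation
safe_page = f"<p>Hello, {html.escape(user_input)}</p>"  # escaped: < becomes &lt; etc.

print("<script>" in unsafe_page)  # True  - a browser would execute this
print("<script>" in safe_page)    # False - rendered as plain text instead
```

Template engines like Django's do this escaping automatically, which is one reason not to bypass them when AI-generated code suggests raw string concatenation.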
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 8 (focused on Testing and CI/CD), including coding the examples discussed in class, with GitHub Actions and wrapping up with Docker.
I watched the DeepLearning.AI course: Collaborative Writing and Coding with OpenAI Canvas. I found the course quite basic – more of a tutorial / feature overview than tips and tricks to get the most out of it.
The course gave me the impression that OpenAI Canvas is still very much an MVP, early in development, and will require user experimentation to get the most out of it. The premise is great: it should make writing a narrative much easier, since highlighting the sections to rework is far more intuitive than doing it all via prompt and having to specify each time which section to edit.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 7 (focused on Python Django framework), including coding the examples discussed in class.
I continued reading O'Reilly's Fundamentals of Data Engineering: Plan and Build Robust Data Systems by Joe Reis & Matt Housley. I read pages 123-147.
I also watched a Y Combinator video for inspiration on how AI can disrupt vertical SaaS: Vertical AI Agents Could Be 10X Bigger Than SaaS. A pattern is starting to emerge where AI is not just making workers more productive, but in some cases will actually be replacing them...
I finished the Stanford Supervised Machine Learning: Regression and Classification course on Coursera.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 6 (focused on JavaScript front ends), including coding the examples discussed in class.
I continued watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 6 (focused on JavaScript front ends), including coding the examples discussed in class.
I started watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 6 (focused on JavaScript front ends), including coding the examples discussed in class.
I also started watching Stanford's CS224N: Natural Language Processing with Deep Learning, Lecture 1. Trying to understand more about how LLMs work.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 5 (focused on JavaScript), including coding of all the examples discussed in class.
I worked on week 3 of Stanford's Machine Learning Specialization course and finished the Gradient descent for logistic regression section.
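To cement that section, here's a minimal numpy version of gradient descent for logistic regression – my own toy 1-D example, not the course lab:

```python
import numpy as np

# Logistic regression trained by batch gradient descent.
# Toy data: label is 1 when x is above roughly 2.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.5, 1.0, 1.5, 2.5, 3.0, 3.5])
y = np.array([0, 0, 0, 1, 1, 1])

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    p = sigmoid(w * X + b)      # predicted probabilities
    dw = np.mean((p - y) * X)   # gradient of the log loss w.r.t. w
    db = np.mean(p - y)         # gradient w.r.t. b
    w -= lr * dw
    b -= lr * db

preds = (sigmoid(w * X + b) >= 0.5).astype(int)
print(preds)  # matches y on this separable toy set: [0 0 0 1 1 1]
```

The satisfying part is that the gradient formulas are nearly identical to linear regression's – only the sigmoid and the loss change.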
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 4 (focused on ORMs and APIs with Python), including coding of all the examples discussed in class.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 3 (focused on SQL with Python), including coding of all the examples discussed in class.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 2 (focused on Flask), including coding of all the examples discussed in class.
I started week 3 of Stanford's Machine Learning Specialization course and finished the Classification with logistic regression sections.
I finished watching Harvard CS50W Web Programming with Python and JavaScript, Lecture 1 (focused on HTML/CSS), including coding of all the examples discussed in class. Given the pacing of the class, I decided to watch all the classes at 1.5x speed.
After some research, I decided that Harvard's CS50W Web Programming with Python and JavaScript course would be the best way to jump into full stack programming, related processes, and product operations. There is an older 2018 course and newer 2022 course. I want to learn Flask, and I really enjoy the Q&A between instructor and students, so I started with 2018 course and will supplement with 2022 lessons later if there is anything different. The course covers a lot of essential technologies and concepts which I want to use to create my own AI applications, such as: Git, HTML, Flask, SQL, API's, JavaScript, Django, Testing, CI/CD, Scalability, and Security.
I started with Lecture 0, to refresh my Git and HTML skills. I watched the lesson on 2x speed, as I found myself knowledgeable enough that a refresh is sufficient. That said, I did follow along with the examples and coded them in Visual Studio, which was a great exercise to get familiar with the motions of HTML/CSS/Python coding with Git and Visual Studio.
I'm also grateful that I took programming classes in college, and I will not have to learn the basics of if statements, for loops, and classes, though understandable that for those less versed in programming, that will be the first step, and there is a Harvard CS50X Introduction to Programming class for that too.
I believe it will be essential to understanding the underlying web page code when building applications with LLMs, so that I know how to fix bugs, modify the code and maintain it.
I finished week 2 of Stanford's Machine Learning Specialization course with Andrew Ng, and came away with a much better understanding of how linear regression works with multiple input features, and how to deal with feature scaling, feature engineering and polynomial regression. It was also reassuring to learn that a few simple Python functions exist to do all this work, though understanding how machine learning works under the hood will be useful when interacting with Data Scientists.
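The week-2 ideas fit in a few lines of numpy – z-score feature scaling, then multiple linear regression by gradient descent. The house-style numbers here are toy data for illustration:

```python
import numpy as np

# Multiple linear regression with feature scaling (week-2 material).
# Toy data: [size in sqft, bedrooms] -> price.
X = np.array([[2104.0, 5.0], [1416.0, 3.0], [1534.0, 3.0], [852.0, 2.0]])
y = np.array([460.0, 232.0, 315.0, 178.0])

# Z-score scaling: each feature to mean 0, std 1, so gradient descent
# converges quickly despite the wildly different feature ranges.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

w = np.zeros(X.shape[1])
b, lr = 0.0, 0.1
for _ in range(5000):
    err = Xs @ w + b - y
    w -= lr * (Xs.T @ err) / len(y)   # gradient w.r.t. weights
    b -= lr * err.mean()              # gradient w.r.t. bias

print(np.round(Xs @ w + b, 1))  # model predictions for the training rows
```

Without the scaling step, the same learning rate would make the sqft dimension oscillate while bedrooms barely moved – which is exactly why the course hammers on it.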
I finished watching Harvard CS50 Introduction to Artificial Intelligence with Python, Lecture 0, and decided to pause this course until I have more of a comprehensive overview of programming frameworks that are essential to building the application.
OpenAI shared today that they are working on o3, the next-generation reasoning model, which is now undergoing testing. Supposedly o3 is 20% more accurate on a series of programming tasks than the o1 model. A good write-up: Francois Chollet's OpenAI o3 Breakthrough High Score on ARC-AGI-Pub article.
I continued with the Stanford Machine Learning Specialization course, reviewing the labs in week 2.
I continued watching Harvard CS50 Introduction to Artificial Intelligence with Python 2020, Lecture 0. The search algorithm discussion is intriguing, and I really like the instructor, Brian Yu, who explains the concepts clearly. Will finish watching, but thinking I will come back to this course later after I've mastered app coding fundamentals.
I continued with the Stanford Machine Learning Specialization course, reviewing the labs in week 2.
I started watching Harvard CS50 Introduction to Artificial Intelligence with Python, Lecture 0.
I continued with the Stanford Machine Learning Specialization course, embarking on week 2, and I completed the Multiple linear regression lessons.
Great news. Grok AI from X is free for all X subscription users.
I continued with the Stanford Machine Learning Specialization course, and I completed the Train the model with gradient descent lessons and week 1.
I worked with Claude to implement scripts for X and Reddit that would capture the top AI and PM news I should be aware of. With the free X developer account, I discovered a significant 100-post-per-month limitation, so I tried to pull just the day's posts for three influencers. They had none, so the script returned an error for each – which I have no problem with, except X deducted 3 requests from my 100!!! Jeez, you would think an unsuccessful post retrieval would not eat up my remaining credits.
I had fewer issues with Reddit and successfully pulled top and trending posts in the categories I wanted. Now I realize I have to figure out (probably with AI :) a curation process, as there are trending posts I don't care for (especially on adult topics!).
I spotted that ChatGPT 4o has been updated with a new June 2024 training dataset knowledge cutoff date! Looking forward to more up to date responses by ChatGPT.
I liked the Stanford Machine Learning Specialization course taught by Andrew Ng, and wanted to work on the python assignments, so I signed up for the course on Coursera.
I completed the Supervised vs Unsupervised Machine Learning and Regression Model lessons.
I also continued with the Generative AI with Large Language Models course and completed the Introduction to LLMs and the generative AI project lifecycle.
I continued to read O'Reilly's Fundamentals of Data Engineering. Read pages 105-123.
Watched the Stanford "Machine Learning Specialization" course on YouTube by Andrew Ng. Fast-forwarded through 1-8 quickly, as most of those are a repeat of the "AI for Everyone" course.
Attended a Supra meetup in San Francisco with fellow product leaders. We had lively conversations about AI, including building products for AI such as RAG ML Ops pipelines (shout out to Phil Marshall who is working on a product for that) and PM productivity tools.
One tool discussed was v0.dev for creating rapid prototypes – potentially displacing the need for a PRD? I added v0.dev to the list of tools to explore.
I enrolled in the Generative AI with Large Language Models course, as I see LLMs being the most relevant in the near term for my projects. I started on week 1 classes.
I learned about multi-headed attention from the pivotal Transformers paper by Google titled "Attention Is All You Need". This is the research paper that kicked off the large scale LLM models we are familiar with.
Watched a YouTube video by Andrej Karpathy explaining how LLMs work: [1hr Talk] Intro to Large Language Models. Really liked the concept of an LLM OS – how an LLM can be the kernel of an emerging operating system, or maybe like the CPU of a computer, with peripheral attachments such as a Python interpreter, video and audio modalities, connectivity to the web via a browser, or even other LLMs orchestrating a workflow process.
OpenAI released Sora, their video generator to all ChatGPT Plus & Pro customers. Enjoyed learning about the potential of the technology.
To better understand the data engineering discipline, which is essential for AI, I decided to read the O'Reilly book Fundamentals of Data Engineering: Plan and Build Robust Data Systems by Joe Reis & Matt Housley. I already started the book earlier in the year and made it to page 104 today.
Continued with AI for Everyone course and completed week 4.
Continued with AI for Everyone course and completed week 2 & 3.
Listened to half of the podcast: @Asianometry & Dylan Patel – How the Semiconductor Industry Actually Works
Got up to speed on the latest ChatGPT Pro release. Unlimited o1 model use for $200/mo, for users who need research-grade intelligence. Release info link.
Also noticed that ChatGPT Plus users now have access to o1. No more "preview".
Based on recommendations, decided to start taking courses. Enrolled in AI for Everyone and completed week 1.
Listened to YouTube podcast: Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment
Based on recommendations, started listening to YouTube podcasts:
Mark Zuckerberg - Llama 3, $10B Models, Caesar Augustus, & 1 GW Datacenters and How to use Perplexity
Decided to catalog top influencers in AI to learn from them. Started adding to a list. Began reading posts.
Influencers I follow: Andrew Ng, Sam Altman, Yann LeCun
Started reading research articles on AI Product Management. Found them too theoretical.