
🤖 HackAI Newsletter — January 2026

Your 2-minute pulse on AI + Texas tech. Powered by the HackAI community of builders, dreamers, and tinkerers in Austin, TX.

🗓️ Upcoming Events

  • 🎤 HackAI Meetup — This Thursday, Jan 8, at the Capital Factory - Our monthly gathering returns from 6–8pm! Expect lightning talks, AI demos, and good vibes with Austin's AI community. Sign up here →
    We’ll see you Thursday on the first floor in the Voltron room!

  • 📊 Data Day Texas AI — January 24–25 - Two days of data and AI talks at the AT&T Executive Education Center. Great networking with the Austin data community. Learn more here →

🌍 Big AI Headlines

🚀 OpenAI Drops GPT-5.2 OpenAI's most capable model yet landed December 11 — 400K token context, 30% fewer hallucinations, and three flavors: Instant, Thinking, and Pro. Now the default for all ChatGPT users.

⚡ Google Fires Back with Gemini 3 Flash Processing 1 trillion+ tokens daily at just $0.50/million — it's 3x faster than Gemini 2.5 Pro and now powers Search globally.

💰 Nvidia Buys Groq for $20 Billion Nvidia's largest acquisition ever (December 24) brings Groq's lightning-fast inference tech in-house. Nvidia also announced new physical AI models at CES on January 5. The AI chip wars rage on.

🤖 Meta Snags Manus for $2B+ In a 10-day whirlwind deal, Meta acquired the buzzy agentic AI startup to power smarter assistants across Facebook, Instagram, and WhatsApp.

📜 Trump vs. State AI Laws A December 11 executive order targets state AI regulations (looking at you, Colorado). The DOJ will challenge "onerous" laws, while 42 state AGs push back, demanding AI safety standards.

🇨🇳 China Moves First on "Human-Like AI" Rules Draft regulations require AI chatbots to disclose they're not human every 2 hours and mandate human takeover for suicide discussions.

🤠 Texas AI Spotlight

🚗 Tesla Goes Truly Driverless in Austin As of December 15, Tesla removed all human safety monitors from ~37 robotaxis roaming Austin streets. FSD v13, vision-only AI, zero humans on board. The future is here (and it's in our backyard).

⚖️ Texas AI Law Takes Effect January 1 The Texas Responsible AI Governance Act (TRAIGA) is now live — requiring government AI disclosure, banning manipulation tactics, and creating a regulatory sandbox. Texas is now #2 behind Colorado for comprehensive AI laws.

🔌 Goldman Backs 5-Gigawatt AI Power Play South Dallas is getting massive private power campuses for AI data centers — GridFree AI's "South Dallas One" project could be online within 24 months.

🎓 UT Austin Hits 5,000+ GPUs The most AI computing power in academia, period. New Dell servers with 4,000+ NVIDIA Blackwell GPUs plus the Horizon supercomputer coming in 2026.

🏥 First Healthcare AI Settlement AG Ken Paxton reached a landmark settlement with Dallas-based Pieces Technologies over allegedly false AI accuracy claims at Texas hospitals. Precedent set.

🗣️ This Month's AI Debate

Vision-only vs. LiDAR: Is Tesla right to bet it all on cameras?

With Tesla's truly driverless robotaxis now roaming Austin streets using FSD v13's vision-only AI, the autonomous driving world remains split on the best path forward.

🔍 Quick Explainer: The Two Approaches

|  | Vision-Only (Tesla) | LiDAR + Cameras (Waymo, Cruise) |
| --- | --- | --- |
| How it works | Cameras capture video, neural networks interpret the scene — like a human driver | Laser pulses create precise 3D point clouds of surroundings, fused with camera data |
| Cost | ~$1,000 in cameras per vehicle | $10,000–$75,000+ in sensors per vehicle |
| Strengths | Cheap, scalable, reads signs/lights, works on any Tesla | Accurate depth perception, works in darkness/fog/glare |
| Weaknesses | Struggles with edge lighting, no true depth data | Expensive, complex, harder to scale to consumer cars |


💬 What do you think? We'll discuss at the next HackAI Meetup! 👇

👀 Team Vision-Only (Tesla's Approach) — Humans drive with just two eyes — why can't AI? Cameras are cheap, scalable, and capture rich semantic data like text and traffic lights. LiDAR is an expensive crutch that adds complexity without solving the real problem: better neural networks. Tesla's 7 billion miles of real-world data proves the model works.

🔦 Team LiDAR (Waymo, Cruise, etc.) — Cameras fail in rain, fog, glare, and darkness. LiDAR provides precise 3D depth data that doesn't rely on lighting conditions. Waymo's safety record speaks for itself. When lives are on the line, redundancy isn't a crutch — it's responsible engineering. Tesla's 7 reported Austin crashes since June raise questions.

💬 HackAI Community

Join us at the next HackAI meetup for local demos, experiments, and inspiration. Got an AI project, job, or idea to feature?

👉 Reply to this email or message us on Meetup!

Stay curious, Austin 🤖💡

— The HackAI Team, Reid & Bryce