<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Node Quest]]></title><description><![CDATA[Node Quest]]></description><link>https://blog.nodequest.net</link><generator>RSS for Node</generator><lastBuildDate>Mon, 11 May 2026 06:27:33 GMT</lastBuildDate><atom:link href="https://blog.nodequest.net/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[My Adventures in Vibe Coding Tools — The Pain of Versioning (Part 2)]]></title><description><![CDATA[I’m a tech person—not the type who grinds out code all day, but not the type who never touches it either. For me, coding is pure interest; I’m not looking to make a full living out of it, just looking to find some joy in the process of exploration. A...]]></description><link>https://blog.nodequest.net/my-adventures-in-vibe-coding-tools-the-pain-of-versioning-part-2</link><guid isPermaLink="true">https://blog.nodequest.net/my-adventures-in-vibe-coding-tools-the-pain-of-versioning-part-2</guid><category><![CDATA[vibe coding]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Zhibin Yang]]></dc:creator><pubDate>Thu, 08 Jan 2026 16:00:00 GMT</pubDate><content:encoded><![CDATA[<blockquote>
<p>I’m a tech person—not the type who grinds out code all day, but not the type who never touches it either. For me, coding is pure interest; I’m not looking to make a full living out of it, just looking to find some joy in the process of exploration. As my handle suggests, my writing style will be a "raw log"—because I believe the exploration of technology is always a journey. If anyone cares about the process, a slightly refined "exploration log" is often more valuable than a polished summary.</p>
</blockquote>
<h2 id="heading-im-hurtingso-whats-the-move">I’m Hurting—So What’s the Move?</h2>
<p>After getting burned by versioning issues in that interview and the struggles I mentioned before, I knew I had to change something. Having built a few small AI apps, I knew the solution: I needed to feed the AI the information it was missing—specifically, the latest documentation and code samples. But how do I present that to the AI? My first thought was <strong>MCP (Model Context Protocol)</strong>. If the AI realizes that it <em>doesn't</em> know something during the Vibe Coding process, it should be able to go out, find the relevant new features or code samples, and then code based on those examples. MCP is essentially the interface that lets the AI call external tools at runtime.</p>
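<p>To make that concrete, here is a toy sketch of the tool-calling loop that MCP standardizes. This is <em>not</em> the real MCP wire protocol (which is JSON-RPC over stdio or HTTP/SSE); the tool name <code>search_docs</code> and the snippet it returns are invented purely for illustration.</p>

```python
# Conceptual sketch only: the model asks the host to run a named tool,
# the host dispatches it, and the result comes back as fresh context.
# The tool name and the doc snippet below are made up for illustration.

def search_docs(query: str) -> str:
    """Pretend documentation lookup; a real server would query a vector DB."""
    docs = {
        "langgraph state": "In v1.0, graphs are built via StateGraph with a typed state schema.",
    }
    for key, snippet in docs.items():
        if key in query.lower():
            return snippet
    return "No matching docs found."

TOOLS = {"search_docs": search_docs}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a model-issued tool call to the registered implementation."""
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](**arguments)

# The model, realizing it doesn't know the current API, issues a call:
print(handle_tool_call("search_docs", {"query": "LangGraph state usage"}))
```

<p>In a real setup, the MCP server advertises its tool list to the client first, and the model decides mid-conversation when to issue a call like this.</p>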
<p>That led to two questions:</p>
<ol>
<li><p>Where do I find these features and code samples?</p>
</li>
<li><p>Once I have this info, how do I serve it via MCP?</p>
</li>
</ol>
<p>I consulted Gemini, and here is the verdict: currently, there isn't a one-stop shop, but there are ways to do it if you split it into two steps. For the first question—latest features and samples—they usually exist in the library’s docs or auto-generated documentation. But <em>scraping</em> them is the real hurdle. Let’s talk about that.</p>
<h3 id="heading-scraping-the-documentation">Scraping the Documentation</h3>
<p>Gemini suggested two main paths: SaaS-based scrapers or local scraping libraries, both of which can convert pages into Markdown. I tested two:</p>
<ol>
<li><strong>The SaaS Route:</strong> <a target="_blank" href="https://www.firecrawl.dev/"><strong>Firecrawl</strong></a></li>
</ol>
<ul>
<li><p>The free tier exists, but it’s limited to about 500 pages a month.</p>
</li>
<li><p>It supports real-time AI crawling, but that’s an extra cost.</p>
</li>
<li><p>It supports "MCP + Real-time AI Crawl," but doesn't seem to support "MCP + Pre-scraped content" out of the box.</p>
</li>
<li><p>Firecrawl is open-source and can be self-hosted via Docker, but the build process for Docker Compose is painfully slow and full of errors (I’m on an aging Mac that can't be updated anymore—I'm not going to torture myself). I gave up on this after one try.</p>
</li>
</ul>
<ol start="2">
<li><strong>The Local Route:</strong> <a target="_blank" href="https://docs.crawl4ai.com/"><strong>Crawl4ai</strong></a></li>
</ol>
<ul>
<li><p>This is a Python library that supports various crawling methods and filtering. It works great and, most importantly, it’s free. We’ll look at the results in a bit.</p>
</li>
<li><p>It’s very easy to use: I just had the AI write a call script, gave it a root directory for the docs and a prefix filter, and I was good to go.</p>
</li>
</ul>
<p>The AI told me that Cursor's built-in scraping is likely powered by Firecrawl, so the SaaS version is probably top-tier. I tried using it to crawl some LangChain docs, and the quality was solid. But the credits vanish fast—without paying, it’s basically a non-starter for large docs.</p>
<p>When I moved to <strong>Crawl4ai</strong>, I had to decide <em>where</em> to run it. Documentation is usually hosted globally, and scraping requires a lot of back-and-forth communication. Plus, network latency (and the Great Firewall if you're in certain regions) can be a pain. After some thought, I found the perfect spot: <strong>Google Colab</strong>.</p>
<p>Some people might ask why I’m so obsessed with Google products. The truth is, Google provides so many low-cost (often free) and open solutions for developers to play with. At its core, Google Colab is just an online Jupyter Notebook running on a temporary VM. It’s meant for ML/AI and data science, but you can absolutely use it as a standard, cloud-based Jupyter environment. A pro-tip: Colab gives you free GPU credits, so you can even use it for light LLM fine-tuning or small-scale BERT pre-training.</p>
<p>Here, by mounting Google Drive to the Jupyter Notebook, I could have Colab scrape the pages for me. By setting a reasonable <code>sleep</code> interval, I scraped thousands of pages without a single IP ban.</p>
<p>Here is the Jupyter script I used to scrape the Kubernetes JavaScript SDK documentation (I’m hosting a mirror of it on a <a target="_blank" href="http://github.io">github.io</a> page):</p>
<pre><code class="lang-python"><span class="hljs-comment"># Install crawl4ai</span>
!pip install crawl4ai 
<span class="hljs-comment"># Initialize crawl4ai - note: you might need to restart the Colab runtime after first run</span>
!crawl4ai-setup 

<span class="hljs-keyword">from</span> google.colab <span class="hljs-keyword">import</span> drive 
drive.mount(<span class="hljs-string">'/content/drive'</span>) <span class="hljs-comment"># Mount Google Drive</span>

<span class="hljs-keyword">import</span> asyncio
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> re
<span class="hljs-keyword">from</span> urllib.parse <span class="hljs-keyword">import</span> urlparse
<span class="hljs-keyword">from</span> crawl4ai <span class="hljs-keyword">import</span> AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

<span class="hljs-comment"># ==================== Config ====================</span>
<span class="hljs-comment"># 1. Target &amp; Scope</span>
BASE_URL = <span class="hljs-string">"https://zhibinyang.github.io/kubernetes-client-node-docs-v1.4-unofficial/modules/index.html"</span>
PREFIX = <span class="hljs-string">"https://zhibinyang.github.io/kubernetes-client-node-docs-v1.4-unofficial/"</span>

<span class="hljs-comment"># 2. Extension Filter (Set to None or [] to crawl all pages under the prefix)</span>
ALLOWED_EXTENSIONS = [<span class="hljs-string">'.html'</span>, <span class="hljs-string">'.htm'</span>]

<span class="hljs-comment"># 3. Storage Settings</span>
OUTPUT_DIR = <span class="hljs-string">"/content/drive/MyDrive/Docs/Kubernetes-Node"</span>
TRACKER_FILE = os.path.join(OUTPUT_DIR, <span class="hljs-string">"crawled_urls.txt"</span>)

<span class="hljs-comment"># 4. Crawling Preferences</span>
SLEEP_TIME = <span class="hljs-number">1.0</span>  <span class="hljs-comment"># Seconds between requests</span>
<span class="hljs-comment"># ===============================================</span>

os.makedirs(OUTPUT_DIR, exist_ok=<span class="hljs-literal">True</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">clean_url</span>(<span class="hljs-params">url</span>):</span>
    <span class="hljs-string">"""Remove anchors and query params to ensure uniqueness"""</span>
    <span class="hljs-keyword">return</span> url.split(<span class="hljs-string">'#'</span>)[<span class="hljs-number">0</span>].split(<span class="hljs-string">'?'</span>)[<span class="hljs-number">0</span>].rstrip(<span class="hljs-string">'/'</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">should_crawl</span>(<span class="hljs-params">url, prefix, processed_set, allowed_exts</span>):</span>
    <span class="hljs-string">"""Check if a URL meets the crawling criteria"""</span>
    cleaned = clean_url(url)

    <span class="hljs-comment"># Basic check: Not processed and matches prefix</span>
    <span class="hljs-keyword">if</span> cleaned <span class="hljs-keyword">in</span> processed_set <span class="hljs-keyword">or</span> <span class="hljs-keyword">not</span> cleaned.startswith(prefix):
        <span class="hljs-keyword">return</span> <span class="hljs-literal">False</span>

    <span class="hljs-comment"># Extension check: Must match if a list is provided</span>
    <span class="hljs-keyword">if</span> allowed_exts:
        path = urlparse(cleaned).path
        <span class="hljs-comment"># Handle index cases: paths ending in / usually correspond to index.html</span>
        <span class="hljs-keyword">if</span> path.endswith(<span class="hljs-string">'/'</span>) <span class="hljs-keyword">or</span> <span class="hljs-keyword">not</span> os.path.basename(path):
            <span class="hljs-keyword">return</span> <span class="hljs-literal">True</span>
        <span class="hljs-keyword">return</span> any(path.lower().endswith(ext.lower()) <span class="hljs-keyword">for</span> ext <span class="hljs-keyword">in</span> allowed_exts)

    <span class="hljs-keyword">return</span> <span class="hljs-literal">True</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_file_name</span>(<span class="hljs-params">url</span>):</span>
    <span class="hljs-string">"""Naming logic: Use the last two parts of the path"""</span>
    path = urlparse(clean_url(url)).path.strip(<span class="hljs-string">'/'</span>)
    <span class="hljs-comment"># Strip known extensions to save as .md</span>
    <span class="hljs-keyword">if</span> ALLOWED_EXTENSIONS:
        <span class="hljs-keyword">for</span> ext <span class="hljs-keyword">in</span> ALLOWED_EXTENSIONS:
            path = re.sub(re.escape(ext) + <span class="hljs-string">r'$'</span>, <span class="hljs-string">''</span>, path, flags=re.IGNORECASE)

    parts = [p <span class="hljs-keyword">for</span> p <span class="hljs-keyword">in</span> path.split(<span class="hljs-string">'/'</span>) <span class="hljs-keyword">if</span> p]
    <span class="hljs-keyword">if</span> len(parts) &gt;= <span class="hljs-number">2</span>:
        name = <span class="hljs-string">f"<span class="hljs-subst">{parts[<span class="hljs-number">-2</span>]}</span>-<span class="hljs-subst">{parts[<span class="hljs-number">-1</span>]}</span>"</span>
    <span class="hljs-keyword">elif</span> len(parts) == <span class="hljs-number">1</span>:
        name = parts[<span class="hljs-number">0</span>]
    <span class="hljs-keyword">else</span>:
        name = <span class="hljs-string">"index"</span>

    <span class="hljs-keyword">return</span> re.sub(<span class="hljs-string">r'[^\w\-]'</span>, <span class="hljs-string">'_'</span>, name) + <span class="hljs-string">".md"</span>

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">universal_crawler</span>():</span>
    <span class="hljs-comment"># 1. Load progress</span>
    processed_urls = set()
    <span class="hljs-keyword">if</span> os.path.exists(TRACKER_FILE):
        <span class="hljs-keyword">with</span> open(TRACKER_FILE, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f:
            processed_urls = set(line.strip() <span class="hljs-keyword">for</span> line <span class="hljs-keyword">in</span> f <span class="hljs-keyword">if</span> line.strip())

    <span class="hljs-comment"># 2. Crawler Config</span>
    browser_config = BrowserConfig(headless=<span class="hljs-literal">True</span>)
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        word_count_threshold=<span class="hljs-number">100</span>,
        remove_overlay_elements=<span class="hljs-literal">True</span>
    )

    <span class="hljs-keyword">async</span> <span class="hljs-keyword">with</span> AsyncWebCrawler(config=browser_config) <span class="hljs-keyword">as</span> crawler:
        queue = [BASE_URL]

        <span class="hljs-keyword">while</span> queue:
            current_raw_url = queue.pop(<span class="hljs-number">0</span>)
            current_url = clean_url(current_raw_url)

            <span class="hljs-comment"># Re-check (prevent duplicates in queue)</span>
            <span class="hljs-keyword">if</span> current_url <span class="hljs-keyword">in</span> processed_urls:
                <span class="hljs-keyword">continue</span>

            print(<span class="hljs-string">f"🚀 Processing: <span class="hljs-subst">{current_url}</span>"</span>)

            result = <span class="hljs-keyword">await</span> crawler.arun(url=current_url, config=run_config)

            <span class="hljs-keyword">if</span> result.success:
                <span class="hljs-comment"># A. Save file</span>
                file_name = get_file_name(current_url)
                file_path = os.path.join(OUTPUT_DIR, file_name)
                <span class="hljs-keyword">with</span> open(file_path, <span class="hljs-string">"w"</span>, encoding=<span class="hljs-string">"utf-8"</span>) <span class="hljs-keyword">as</span> f:
                    f.write(result.markdown)

                <span class="hljs-comment"># B. Update tracker</span>
                processed_urls.add(current_url)
                <span class="hljs-keyword">with</span> open(TRACKER_FILE, <span class="hljs-string">"a"</span>) <span class="hljs-keyword">as</span> f:
                    f.write(current_url + <span class="hljs-string">"\n"</span>)

                <span class="hljs-comment"># C. Find new links</span>
                <span class="hljs-keyword">for</span> link <span class="hljs-keyword">in</span> result.links.get(<span class="hljs-string">"internal"</span>, []):
                    link_url = clean_url(link[<span class="hljs-string">'href'</span>])
                    <span class="hljs-keyword">if</span> should_crawl(link_url, PREFIX, processed_urls, ALLOWED_EXTENSIONS):
                        queue.append(link_url)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"❌ Failed: <span class="hljs-subst">{current_url}</span> - <span class="hljs-subst">{result.error_message}</span>"</span>)

            <span class="hljs-keyword">await</span> asyncio.sleep(SLEEP_TIME)

<span class="hljs-comment"># Execute</span>
<span class="hljs-keyword">await</span> universal_crawler()
</code></pre>
<p>One limitation of Colab is that it requires your browser to stay open and your connection to be active to keep the VM alive. For a task like this, keeping it open all day isn't a huge deal. But if your network drops for more than 30 minutes, the resources are released and your script stops. My script accounts for this by tracking progress, so you can pick up where you left off.</p>
<p>The Kubernetes docs were my "Phase 2" scrape. "Phase 1" was the entire documentation for LangChain.js and LangGraph.js—about 400 pages. While the Kubernetes script was running, I was already testing the search effectiveness of the LangChain data. I didn’t even wait for the K8s scrape to finish; I was too excited to see how it worked with MCP.</p>
<p>Here is a look at one of the scraped files:</p>
<pre><code class="lang-md"><span class="hljs-section"># Tools</span>
Tools extend what agents can do—letting them fetch real-time data, execute code...
<span class="hljs-section">## Create tools</span>
<span class="hljs-section">### Basic tool definition</span>
The simplest way to create a tool is by importing the <span class="hljs-code">`tool`</span> function...
</code></pre>
<h3 id="heading-now-that-ive-scraped-it-how-do-i-use-it">Now that I've scraped it, how do I use it?</h3>
<p>I spent a lot of effort getting this data, so I needed an MCP service to serve it up immediately. Following the principle of <strong>"Don't reinvent the wheel,"</strong> I looked for the simplest implementation. That's when I turned back to a piece of software I've been using for a while: <strong>Obsidian</strong>.</p>
<p>Obsidian is a free Markdown note-taking app with a massive plugin ecosystem. I previously used a plugin called "Copilot" which was great—it uses a built-in vector database combined with Embedding Model APIs (like OpenAI) to provide a full RAG (Retrieval-Augmented Generation) experience for your local notes. Since it's a note-taking app, Markdown rendering is built-in.</p>
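<p>The retrieval half of that RAG flow is simple to sketch. Below is a toy version using bag-of-words "embeddings" and cosine similarity; the real plugin would call an embedding API and a proper vector store, so treat every detail here as a stand-in.</p>

```python
import math
from collections import Counter

# Toy embedding: a bag-of-words Counter. A real setup (like the Copilot
# plugin) calls an embedding API instead; this only shows the
# store-then-retrieve-by-similarity shape of RAG.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": a list of (chunk, vector) pairs.
notes = [
    "The tool function wraps a plain function as an agent tool",
    "StateGraph lets you define nodes and edges for a graph",
]
index = [(chunk, embed(chunk)) for chunk in notes]

def retrieve(query: str, top_k: int = 1):
    qv = embed(query)
    scored = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [chunk for chunk, _ in scored[:top_k]]

print(retrieve("how do I define graph nodes"))
```

<p>Swap <code>embed</code> for a real embedding call and <code>index</code> for a persistent store and you have the skeleton of what these plugins do.</p>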
<p>If there were a plugin that could link a vector database and an Embedding model while providing an <strong>MCP interface</strong>, I’d have the best of both worlds: managing docs in Obsidian (reviewing and editing them) while letting my AI IDE query them. I found a plugin called <strong>MCP Server</strong> (not in the official store; you have to clone it from GitHub and build it manually).</p>
<blockquote>
<p>GitHub: <code>Minhao-Zhang/obsidian-mcp-server</code></p>
</blockquote>
<p>This plugin has all the hallmarks of a proper RAG system: configurable Embedding models, chunking, overlap settings, similarity thresholds, and top-k retrieval. It seemed perfect. However, I later realized a major flaw: <strong>it doesn't support incremental updates.</strong> If you add one new document, you have to rebuild the <em>entire</em> vector database. If you have a lot of docs, your API costs are going to climb fast.</p>
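<p>For the record, incremental updates aren't hard in principle: keep a content hash per document and only re-embed what actually changed. A rough sketch of what I mean, with <code>embed_fn</code> standing in for the paid embedding API call:</p>

```python
import hashlib

# Sketch of the incremental-update logic the plugin lacks: store a SHA-256
# hash per document and only (re-)embed documents whose content changed.
# embed_fn stands in for a paid embedding API call.

def sync_index(docs: dict, hashes: dict, vectors: dict, embed_fn):
    """docs: {path: text}. Mutates hashes/vectors in place; returns the
    list of paths that needed a fresh (paid) embedding call."""
    re_embedded = []
    for path, text in docs.items():
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if hashes.get(path) != digest:
            vectors[path] = embed_fn(text)  # the only paid call
            hashes[path] = digest
            re_embedded.append(path)
    # Drop index entries for deleted documents.
    for path in list(hashes):
        if path not in docs:
            del hashes[path], vectors[path]
    return re_embedded

# First sync embeds both files; the second, with one edit, embeds one.
calls = []
fake_embed = lambda t: calls.append(t) or [len(t)]  # records each "API" call
hashes, vectors = {}, {}
sync_index({"a.md": "one", "b.md": "two"}, hashes, vectors, fake_embed)
sync_index({"a.md": "one", "b.md": "two!"}, hashes, vectors, fake_embed)
print(len(calls))  # 3 embedding calls instead of 4
```

<p>With a full rebuild, every sync pays for every document again; with hashing, you only pay for the diff.</p>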
<p>After getting it running, I tested it using the official <strong>MCP Inspector</strong>:</p>
<pre><code class="lang-bash">npx @modelcontextprotocol/inspector
</code></pre>
<p>Using the SSE mode, I plugged in the SSE address from the Obsidian plugin and connected. I could then list the tools and use <code>simple_vector_search</code> to query my docs.</p>
<h3 id="heading-does-it-actually-work">Does it actually work?</h3>
<p>I had scraped the docs with Crawl4ai and built the vector DB in Obsidian, and I was dying to see how this improved the Vibe Coding experience. But the Inspector tests revealed a problem immediately.</p>
<p>Looking at the LangChain Markdown example from earlier, most documentation pages start with a huge introduction/navigation section and end with a long list of related links. These sections are packed with keywords. If you use standard chunking (say, 1000 or 2000 characters), these headers and footers become their own chunks.</p>
<p>From a vector DB perspective, these "noise" chunks actually have a higher semantic density than the actual code blocks. Plus, the intros often mention features from <em>other</em> pages. As a result, when you search via the MCP interface, the top results—sorted by similarity—are often just the intro/navigation chunks from five different pages, rather than the code implementation you actually need.</p>
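<p>You can reproduce the effect with naive fixed-size chunking, and a cheap pre-filter helps somewhat: drop any chunk that is mostly markdown links before it reaches the embedding step. The 0.4 threshold and the sample page text below are made up; a real corpus needs tuning.</p>

```python
import re

# Naive fixed-size chunking plus a cheap pre-filter: chunks that are mostly
# markdown links (typical of scraped nav headers/footers) are dropped before
# embedding. The threshold and sample text are invented for illustration.

def chunk(text: str, size: int = 200) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]

def is_nav_noise(chunk_text: str, max_link_ratio: float = 0.4) -> bool:
    """Heuristic: if links cover too much of a chunk, treat it as navigation."""
    link_chars = sum(len(m) for m in re.findall(r"\[[^\]]*\]\([^)]*\)", chunk_text))
    return len(chunk_text) > 0 and link_chars / len(chunk_text) > max_link_ratio

page = (
    "[Home](/)[Models](/models)[Tools](/tools)[Agents](/agents)" * 4  # nav bar
    + "\nThe simplest way to create a tool is the `tool` function. "
    + "It wraps a plain function with a schema so the agent can call it. " * 3
)
kept = [c for c in chunk(page) if not is_nav_noise(c)]
print(len(chunk(page)), "chunks before,", len(kept), "after filtering")
```

<p>It's crude (a chunk that mixes a nav tail with real prose still gets through), but it shows why keyword-dense boilerplate dominates similarity search when you don't filter it.</p>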
<p>You can see it yourself if you look at the raw documentation pages: <a target="_blank" href="https://docs.langchain.com/oss/javascript/langchain/models"><code>https://docs.langchain.com/oss/javascript/langchain/models</code></a></p>
<p>From this angle, after all this work to get the pipeline running, the results were... underwhelming. So, is there anything we can do to save our Vibe Coding setup?</p>
<p>There is, though whether it works is a story for next time. Stay tuned.</p>
]]></content:encoded></item><item><title><![CDATA[My Adventures in Vibe Coding Tools — The Pain of Versioning (Part 1)]]></title><description><![CDATA[I’m a tech person—not the type who grinds out code all day, but not the type who never touches it either. For me, coding is pure interest; I’m not looking to make a full living out of it, just looking to find some joy in the process of exploration. A...]]></description><link>https://blog.nodequest.net/my-adventures-in-vibe-coding-tools-the-pain-of-versioning-part-1</link><guid isPermaLink="true">https://blog.nodequest.net/my-adventures-in-vibe-coding-tools-the-pain-of-versioning-part-1</guid><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Zhibin Yang]]></dc:creator><pubDate>Tue, 06 Jan 2026 16:00:00 GMT</pubDate><content:encoded><![CDATA[<blockquote>
<p>I’m a tech person—not the type who grinds out code all day, but not the type who never touches it either. For me, coding is pure interest; I’m not looking to make a full living out of it, just looking to find some joy in the process of exploration. As my handle suggests, my writing style will be a "raw log"—because I believe the exploration of technology is always a journey. If anyone cares about the process, a slightly refined "exploration log" is often more valuable than a polished summary.</p>
</blockquote>
<h2 id="heading-how-i-started-vibe-coding">How I Started "Vibe Coding"</h2>
<p>My journey into Vibe Coding started with an interview two months ago—or rather, not the preparation for it, but the post-interview reflection. It was a live test where I was told in advance I’d have one hour to build an Agent. I mistakenly assumed it would be a low-code task (since it was a Solutions Architect role), so I spent a day brushing up on Coze. When the interview started and they said it was a "full-code" test, I was totally blindsided. Since I was already there, I tried to wing it using my old code snippets and ChatGPT/Gemini to force out a LangGraph.js implementation. The problem? I hadn't touched it in a month, and LangGraph.js had jumped from v0.3 to v1.0. Other dependencies had changed too, and most of my snippets were broken.</p>
<p>In a desperate "just make it work" attempt, I started compromising. I ditched the PDF parsing (dependency issues), gave up on running the Graph with State (I was in such a rush I missed a parameter and couldn't debug it), and just used LangChain.js with basic context (which forced me to add an intent classifier at the start). By the time I managed to paste the PDF text in manually and get the intent branches working, my hour was up—and I hadn't even written the core logic yet.</p>
<p>The interviewer told me that with my approach, unless I had the questions in advance, it was almost impossible to finish in an hour. But, if I had used something like Cursor with a proper setup, an hour is usually plenty.</p>
<p>That moment felt exactly like 2015, seeing how smooth the iPhone was for the first time. Suddenly, the Nokia in my hand just wasn't "it" anymore.</p>
<h2 id="heading-what-tools-do-i-use-for-vibe-coding">What Tools Do I Use for Vibe Coding?</h2>
<p>Looking at the landscape today, Vibe Coding tools are everywhere, from IDEs to CLIs. I checked out some CLI tools and tried the Gemini CLI, but it felt a bit too "high-end" for my needs. I think the key point is: when the pros are Vibe Coding, they are mostly designing and reviewing code. For me, besides building a POC, I still need to learn. Features like "Jump to Definition" and seeing code diffs in an IDE are essential for me, so I decided to stick with an IDE.</p>
<p>On the other hand, to be honest, I tried Cursor four or five months ago. But as a JetBrains Toolbox user of over 10 years, I just couldn't deal with the UI (even with VS Code, I only use it for config files or as a temporary clipboard). Plus, I have this obsession with always having the root directory file tree expanded in the top left. The first time I opened Cursor and saw that empty, "where-do-I-even-start" interface, I gave up in less than an hour. When I reinstalled it two months ago, the new version felt much cleaner and at least acceptable, so I won't nag about that anymore.</p>
<p>I tried building some POCs with Cursor. As a free user, it was great until the credits ran out. The "stamina" wasn't quite there. Back then, I tried switching to Gemini 2.5 Flash—it’s much cheaper and almost impossible to run out of credits, but the performance definitely took a hit.</p>
<p>I also tried Google's Firebase Studio. It’s a web-based IDE with a VS Code-style interface. The "everything-in-browser" approach is fresh, but considering the usual "connectivity issues" with Google products here, even with tools, the experience was a bit flaky and prone to disconnects.</p>
<p>Lately, I've been experimenting with JetBrains + Google Code Assist. Even though Code Assist is just a plugin, it has Vibe Coding capabilities. However, since JetBrains isn't as "open" as others, there are many functional limitations. For instance, you can't just click to run a script; you have to manually copy the command it gives you into the terminal. Support for peripheral tools is almost non-existent. I started with WebStorm 2025.1.1 (released mid-2025), which only supported Prompt Templates and MCP. And the MCP didn't even support HTTP requests; I found all sorts of issues during stdio testing. You couldn't even pick the model—it seemed to be locked to Gemini 2.5.</p>
<p>Since MCP was acting up, I tried upgrading WebStorm to the latest 2025.3.1. Then, a "miracle" happened: my Gemini connection died. After researching with AI for a while, I concluded that the 2025.3 major release had a massive change—JavaScript now has its own Runtime. As a result, my previous global proxies or environment variables stopped working. I couldn't find a fix with my current proxy setup, and the AI suggested moving to a TUN-mode proxy, so I put that aside for a bit.</p>
<p>I looked at other candidates and spent an hour on the Gemini CLI before giving up again. I went back to Cursor, but my quota vanished instantly. Finally, I saw Google's new <strong>Antigravity</strong>. This IDE looks very ambitious. I wanted to try it, but got stuck at the login screen. A quick search revealed—you guessed it—the same proxy issues. At that point, I realized I really had to spend some time figuring out this TUN thing. So, I spent the next day on that.</p>
<p>Today, once I got the TUN setup working, I finally logged into Antigravity and could actually start experimenting.</p>
<p>Now, let's get back to our main topic: the nightmare of versioning.</p>
<h2 id="heading-how-painful-is-versioning-in-vibe-coding">How Painful is Versioning in Vibe Coding?</h2>
<p>If the libraries you use were designed with backward compatibility in mind, and the ecosystem maintained perfect dependency harmony, you might never feel this pain.</p>
<p>While I was messing with Google Code Assist over the past few days, a core theme emerged. I wanted to untie the knot I encountered during that interview: modern software libraries move too fast. Many updates happen <em>after</em> the training cutoff for LLMs. New changes are often incompatible with old ones, meaning the AI gives you old code that breaks when you install the <code>latest</code> version. But if you don't use <code>latest</code>, the AI often suggests random versions and gives you code that simply won't run.</p>
<p>Here are a few glaring examples:</p>
<h3 id="heading-openai-api-model-calls-in-langchainjs">OpenAI API Model Calls in LangChain.js</h3>
<p>Letting an AI write LangChain/LangGraph code is a giant trap. From what I’ve seen in LangChain.js and LangGraph.js, the syntax changes between v0.1, v0.2, v0.3, and v1.0 are massive. AI models often fail to match the correct version in their answers, leading to parameter errors or deprecated syntax.</p>
<p>For example, in LangChain.js v0.3, calling an OpenAI-compatible model (like Doubao API) like this was fine:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> model = <span class="hljs-keyword">new</span> ChatOpenAI({
    <span class="hljs-attr">apiKey</span>: process.env.OPENAI_API_KEY,
    <span class="hljs-attr">configuration</span>: {
        <span class="hljs-attr">baseURL</span>: process.env.OPENAI_BASE_URL,
    },
    <span class="hljs-attr">model</span>: <span class="hljs-string">"doubao-seed-1-6-flash-250828"</span>, 
    <span class="hljs-attr">temperature</span>: <span class="hljs-number">0.7</span>,
});
</code></pre>
<p>But in v1.0, this throws an error:</p>
<pre><code class="lang-plaintext">'Incorrect API key provided: ************************. You can find your API key at https://platform.openai.com/account/api-keys.'
</code></pre>
<p>You guessed it—v1.0 reserved the <code>model</code> parameter exclusively for OpenAI and uses it as the "flag" to identify a native OpenAI call. Other compatible models must now use the <code>modelName</code> parameter.</p>
<p>I've even seen AI suggest this for the same code:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> model = <span class="hljs-keyword">new</span> ChatOpenAI({
    <span class="hljs-attr">apiKey</span>: process.env.OPENAI_API_KEY,
    <span class="hljs-attr">baseURL</span>: process.env.OPENAI_BASE_URL,
    <span class="hljs-attr">model</span>: <span class="hljs-string">"doubao-seed-1-6-flash-250828"</span>, 
    <span class="hljs-attr">temperature</span>: <span class="hljs-number">0.7</span>,
});
</code></pre>
<p>In reality, LangChain.js never had this syntax. The AI likely "hallucinated" this from the LangChain Python version, which uses a <code>base_url</code> parameter in that position.</p>
<h3 id="heading-kubernetes-javascript-sdk-version-changes">Kubernetes JavaScript SDK Version Changes</h3>
<p>You'd think a project as massive as K8s would be well covered, but maybe not enough people use the K8s + JS/TS combo, and that's where the issues creep in.</p>
<p>I ran into this with WebStorm + Google Code Assist. When I asked the AI to generate code for K8s resources, it always gave me something like this:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">await</span> coreV1Api.readNamespacedPod(<span class="hljs-string">"my-pod"</span>, <span class="hljs-string">"default"</span>);
</code></pre>
<p>But if you check the signature for <code>readNamespacedPod</code> in the actual library, the first argument is now a <code>params</code> object containing <code>name</code>, <code>namespace</code>, etc. It should be:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">await</span> coreV1Api.readNamespacedPod({ 
    name: <span class="hljs-string">"my-pod"</span>, 
    <span class="hljs-keyword">namespace</span>: <span class="hljs-string">"default"</span> 
});
</code></pre>
<p>This was a breaking change across the entire SDK. In the current v1.4, all resource calls have shifted to this new format.</p>
<p>Once you fall into this trap during Vibe Coding, it’s a nightmare. You chat about something else, ask it to refactor some code, and it "helpfully" reverts every call in the file back to the old format. You tell it not to, you show it the correct way, but it forgets after a few rounds. Sigh...</p>
<h2 id="heading-what-do-i-do-now">What Do I Do Now?</h2>
<p>I've been writing for too long today. I'll pick this up again tomorrow.</p>
]]></content:encoded></item></channel></rss>