Code Like a Girl
I Didn’t Mean to Start Coding Again
I wasn’t trying to build a tool.
I was trying to understand a gap I couldn’t explain.
Everywhere I looked on Substack, women were writing thoughtful, technical work about AI and software. But on Substack’s Technology Bestseller and Rising lists, the lists that shape who gets discovered and taken seriously, women were almost invisible.
That didn’t line up with what I was seeing every day.
So instead of speculating, I did the simplest thing possible.
I counted.
As of December 8:
- 13 women appeared on the Rising Technology list
- 10 women appeared on the Technology Bestsellers list
- Each list contains 100 publications
The numbers made the next step unavoidable.
If I was going to try to create change with Code Like A Girl, I needed a way to know whether anything we did actually mattered.
Noticing the gap once wasn’t enough. I needed to track it.
The developer in me, though out of practice, knew there was only one way forward: I had to build it.
This is the story of me getting my hands dirty and learning to code again, this time with the help of AI.
This Is Where It Started to Break
In early January, I rolled up my sleeves and started to build. It was the first time I’d built anything more complex than a script since 2008, before I stepped away from hands-on development, had a baby, and returned as a technical leader in 2009.
It was my first time using VS Code.
My first time setting up GitHub.
My first attempt at building a Chrome extension.
It was overwhelming learning the tools and the problem space at the same time. But the CLAG community showed me I could do this. I had been watching women like Karen Spinner, Jenny Ouyang, Karo (Product with Attitude), and Elena building tools and products with AI for months.
It’s a little ironic that my own publication, Code Like A Girl, ended up being the thing that pushed me to start building again.
The One Thing I Wouldn’t Let AI Decide
The initial plan was simple: build a tool that could pull the lists automatically, then let me classify gender manually.
I knew from the start there wasn’t a reliable way to assign gender automatically. I would never trust an algorithm to do it; it would need to be manual. The goal wasn’t to publicly label anyone. Gender assignments would never be shared at the individual level. They existed only to understand the overall distribution of the lists.
When gender couldn’t be confidently determined, entries would be left as unknown. And I knew there could be mistakes. That wasn’t ideal, but it was the tradeoff required to move from anecdote to measurement.
This wasn’t about certainty. It was about getting a signal.
I had also learned from Elena that writing a Product Requirements Document (PRD) before building a tool with AI was critical to getting it right.
I spent the first few hours writing a PRD to make sure all the requirements I wanted were detailed. This was a fantastic exercise to help me think through the problem space clearly.
However, this is where things stopped feeling manageable.
I Tried to Do Everything at Once
The first thing I had to do was set up a Chrome extension and learn how to use it.
I created a Hello World Chrome extension and then had to ask ChatGPT how to install it, check for errors, and make sure it was running properly.
I attempted to do everything at once: scrape the UI, build a classifier, and figure out how to use and debug Chrome storage. ChatGPT kept generating new files, which I kept installing and running without fully understanding how they fit together.
At some point, I stopped myself and asked:
What am I doing?
I had 9 versions of the files by now, and I didn’t have a clean way to edit them.
I had jumped in too fast.
I paused and assessed what I needed:
- An IDE to edit the files
- Version control to manage changes safely
I asked ChatGPT for the best IDE for vibe coding, and it suggested VS Code. So I learned how to set that up.
As part of the setup, it prompted me to connect to GitHub! Perfect. I needed version control. The fact that I had never set up GitHub was just another thing to learn.
I could do this.
I felt like a complete newbie. I had to ask ChatGPT how to add, commit, and push the code. Ok. Let’s be real. I didn’t even know I needed those three steps to get my code into the repo. Git didn’t exist when I was a software developer in the late 90s and early 2000s.
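For anyone else returning after a long gap, the loop ChatGPT taught me looks like this (a generic example; the commit message and branch name are placeholders, not my actual project):

```
git add .
git commit -m "Describe what changed"
git push origin main
```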
The learning curve on this project was massive.
Once VS Code was set up, it suggested I use Copilot. Ok, I thought, let’s give it a try! It was incredibly effective at updating multiple files at once. That is, right up until I hit the monthly usage limit. 🤦
Back to ChatGPT.
Why the First Approach Was Never Going to Work
My first attempt to extract authors and publications from the Technology lists relied on scraping the UI.
I quickly remembered why that’s a terrible idea. The code was fragile, and minor changes broke everything.
A few days into fighting it, I came across a post by Karen Spinner explaining how to use Substack’s undocumented APIs.
That changed the approach.
I opened developer tools, watched the network traffic, and traced the calls populating the lists. (I had to ask ChatGPT how to do all those things…)
Once I saw the API calls, the problem became much clearer.
I updated the PRD and rewrote the code to use the APIs instead. Within a short time, I could reliably pull items from both lists, store the data, and manually classify gender.
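Here’s roughly what the API-based pull looks like. Substack’s APIs are undocumented, so the endpoint and JSON field names below are assumptions based on what the network tab showed, not a stable contract:

```python
# Sketch of the API-based pull. Substack's leaderboard APIs are
# undocumented, so the JSON field names here are illustrative
# assumptions, not guaranteed shapes.
import requests

def fetch_list(url: str, list_name: str) -> list[dict]:
    """Pull one leaderboard list and keep the fields the tracker needs."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return [
        {
            "list": list_name,                # "rising" or "bestsellers"
            "publication": item.get("name"),  # assumed field name
            "author": item.get("author_name"),  # assumed field name
        }
        for item in resp.json().get("items", [])
    ]

# Usage: the real URL comes straight from the network tab in devtools, e.g.
# rising = fetch_list("<traced API URL>", "rising")
```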
When Progress Still Felt Wrong
On paper, this looked like progress.
In practice, I was still struggling.
When I used Copilot, I let it update the code directly without fully understanding what it was changing. At the time, that felt efficient. But when I later asked ChatGPT for modifications, I realized I didn’t actually know what needed to change or why.
That’s when things started to feel wrong.
The Chrome extension had made sense when I was scraping the UI. Once the data came from API calls, though, the extension itself became part of the problem. The architecture no longer matched the task, and I didn’t understand it well enough to bend it back into shape.
Every time I opened the project, instead of feeling clearer, it felt messier and harder to understand.
So I stopped. I regrouped and asked a better question.
What would actually work for me right now?
I spent time talking through different approaches with ChatGPT, weighing the pros and cons, until I landed on an approach that actually made sense to me.
The solution ended up being simpler than I expected:
- Python scripts to pull the data
- SQLite for storage
- Streamlit for a lightweight UI
- A cron job to run weekly
The first two pieces were familiar. I’d built a Python-based stats tool for Medium a couple of years earlier, and I’d taken an SQL course in 2012 and remembered enough to make sense of what ChatGPT was asking me to do.
That familiarity mattered. It gave me confidence.
And it changed how I built. I asked ChatGPT more questions as we went instead of blindly pushing through. I built it in steps instead of all at once. I reviewed the changes it suggested and made sure I understood them.
Because of the Chrome extension work, I already understood the data schema. I already knew how to pull the data. This time, though, I could actually see what I was building.
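To show what that looked like in practice, here is a minimal sketch of the two core pieces: the snapshot store and the classification screen. The table, column, and widget names are my illustrative choices, not necessarily the tool’s exact schema.

```python
# Minimal sketch of the weekly snapshot store (schema names are
# illustrative assumptions, not the tool's exact schema).
import sqlite3
from datetime import date

def save_snapshot(rows: list[dict], db_path: str = "tracker.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS snapshots (
               captured_on TEXT,
               list_name   TEXT,
               publication TEXT,
               author      TEXT,
               gender      TEXT DEFAULT 'unknown'  -- classified by hand later
           )"""
    )
    conn.executemany(
        "INSERT INTO snapshots VALUES (?, ?, ?, ?, 'unknown')",
        [
            (date.today().isoformat(), r["list"], r["publication"], r["author"])
            for r in rows
        ],
    )
    conn.commit()
    conn.close()
```

The Streamlit side is just as small: read the unclassified rows and record a manual judgment (again, a sketch under the same assumed schema):

```python
# Sketch of the manual classification UI (same assumed schema).
# Run with: streamlit run app.py
import sqlite3
import streamlit as st

conn = sqlite3.connect("tracker.db")
rows = conn.execute(
    "SELECT rowid, publication, author FROM snapshots WHERE gender = 'unknown'"
).fetchall()

for rowid, publication, author in rows:
    choice = st.selectbox(
        f"{publication} by {author}",
        ["unknown", "woman", "man"],
        key=f"gender-{rowid}",
    )
    if choice != "unknown":
        conn.execute(
            "UPDATE snapshots SET gender = ? WHERE rowid = ?", (choice, rowid)
        )
        conn.commit()
```

Defaulting every row to unknown keeps the manual step honest: nothing counts as classified until I decide.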
After fixing edge cases, resolving formatting inconsistencies, and ironing out a few stubborn bugs, we captured our first complete snapshot.
[Screenshot from the first run of the tool.]

WOOT WOOT!!!
We now had a repeatable, transparent way to make patterns visible.
And visible patterns are harder to ignore.
Where Things Stand
As of January 29, the tracker captures a new snapshot every week.
Right now, that happens locally with a cron job. My laptop needs to be awake and plugged in before 9 a.m. Eastern. It’s not elegant, but it works.
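For the curious, the whole schedule is one crontab line, roughly like this (the day, time, and path are illustrative placeholders, not my actual entry):

```
# m h dom mon dow  command: run the snapshot script weekly, Mondays at 9 a.m.
0 9 * * 1 /usr/bin/python3 /path/to/snapshot.py
```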
Eventually, I’ll move it to the cloud. For now, consistency matters more than polish.
Next, I want to make the results easier to see. Maybe a website with static views showing gender distribution and trend data that makes movement visible over time.
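Even before that website exists, the trend view is one query away. A sketch against the illustrative schema from earlier:

```python
# Count women per list per snapshot (illustrative schema from above).
# In SQLite, a boolean expression evaluates to 0 or 1, so SUM() counts matches.
import sqlite3

conn = sqlite3.connect("tracker.db")
trend = conn.execute(
    """SELECT captured_on, list_name,
              SUM(gender = 'woman') AS women,
              COUNT(*) AS total
       FROM snapshots
       GROUP BY captured_on, list_name
       ORDER BY captured_on"""
).fetchall()

for captured_on, list_name, women, total in trend:
    print(f"{captured_on} {list_name}: {women}/{total} women")
```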
I’m also experimenting with what happens when you pair this kind of structured data with analysis tools. I recently started using Stack Contacts by Finn Tropy, a local tool that pulls Substack data into a database and exposes it through an MCP extension so AI tools like Claude can query and analyze it.
The combination of clean data, historical context, and the ability to ask better questions is where this gets interesting.
Why This Matters (and Why It’s Personal)
This project was never about building a perfect tool.
It was about proving that patterns don’t have to stay invisible, and that you don’t need permission or a pristine setup to start measuring the things that matter.
I didn’t set out to build software again.
But once I started, I remembered something important: the ability to build hadn’t gone anywhere.
It just needed a reason and the willingness to be a beginner again.