WRDashboard


Articles

KW Linux User Group (KWLUG)

2026-05: Incident Response, LibreTime

Thomas Busch discusses how to respond to security incidents. Bob Jonkman discusses how he uses LibreTime to manage the Radio Waterloo radio station. See kwlug.org/node/1463 for additional information, slides and other auxiliary materials. Note that this audio has had silences clipped.


Code Like a Girl

Readability vs. Performance: What Should You Optimize First?

Engineering Beyond Code | Part 5

The honest answer is that both matter, but not at the same time and not equally.

Photo by Justin Morgan on Unsplash

Should performance be important? Absolutely yes.
Should it be your starting point? Not really.

I recently ran a poll on LinkedIn where 71% of engineers said they prioritize performance over readability. That instinct isn’t surprising. Performance feels tangible. Faster systems, lower latency, better benchmarks — it’s measurable, visible, and often celebrated.

But here’s the catch: most engineering work doesn’t fail because the code was too slow. It fails because the code was too hard to understand.

Early in your career, this distinction is easy to miss.

You’re drawn to writing clever code. Optimized logic. Compact solutions. It feels like real engineering. But over time, you start realizing that code is not written for machines — it’s written for people who have to read, debug, extend, and trust it.

That’s where readability quietly becomes a force multiplier.

Readable code reduces the time spent deciphering intent. It makes debugging less of a guessing game and more of a structured process. It allows teams to collaborate without constantly reinterpreting each other’s work. And perhaps most importantly, it ages well. Systems evolve, teams change, and requirements shift—but readable code adapts without breaking under its own complexity.

It also has practical advantages that are easy to underestimate. Clean, understandable code lowers onboarding time for new engineers. It reduces the chances of introducing subtle bugs. It makes testing more straightforward. Over time, this directly translates into lower maintenance costs and less technical debt.

That said, performance is not optional — it’s contextual.

There are systems where performance is the product. Real-time gaming, high-frequency trading, large-scale data processing — these domains demand precision and efficiency. In such cases, optimizing code is not premature; it’s essential.

Performance can also unlock real business value. Faster systems can handle more users, reduce infrastructure costs, and provide better user experiences. In competitive environments, these gains matter.

But here’s the nuance most engineers miss: performance should be intentional, not instinctive.

You don’t start with optimization. You start with clarity. You build something correct, understandable, and measurable. Then you identify bottlenecks. Then you optimize—with purpose.
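To make that order concrete, here is a rough sketch in Python: write the obvious version first, then let a profiler tell you where the time actually goes before touching anything. The function and the data are purely illustrative.

import cProfile
import pstats


def normalize_scores(scores):
    # The clear, obviously-correct first version: no clever tricks.
    total = sum(scores)
    return [s / total for s in scores]


def report(scores):
    normalized = normalize_scores(scores)
    return sum(n * n for n in normalized)


if __name__ == "__main__":
    data = list(range(1, 200_000))
    # Measure first; only the functions the profiler flags are worth optimizing.
    with cProfile.Profile() as profiler:
        report(data)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)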

This is what Donald Knuth was pointing to when he said, “Premature optimization is the root of all evil.”
Not that optimization is bad—but that optimizing without context leads to unnecessary complexity with little payoff.

The real skill is not choosing readability over performance or vice versa. It’s knowing when each matters more.

Early in your career, bias toward readability. It will teach you how systems work, how teams collaborate, and how to write code that survives beyond your immediate use case. As you grow, you’ll develop the judgment to selectively optimize where it actually counts.

Because in the end, performance might give you a thrill — the satisfaction of efficiency, speed, and precision.

But readability gives you something far more enduring: stability, clarity, and trust in the systems you build.

And in most real-world systems, that’s what scales.

Readability vs. Performance: What Should You Optimize First? was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

How I Use AI as a Product Data Scientist (A Year In)

The tools, the trade-offs, and the parts of my work I still do myself.

I didn’t notice the shift while it was happening.

It only became clear when I looked back at how I worked a year ago compared to now.

The tools changed, but more than that, the nature of what I spend my time on changed, in a way that’s hard to reverse once you see it.

What my work used to look like

A year ago, a large part of my week was operational.

Cleaning data, writing SQL and Python, producing ad-hoc analyses, building dashboards that stakeholders would inevitably come back to and ask me to update.

A lot of repetition, much of it necessary to make sure the final output actually fit their use case.

That layer has started to shift. For me, the change wasn’t from “using AI as a helper” to “using AI more”; it was from using AI as a tool to building agentic skills, MCP servers, and LLM applications as part of my job.

What I actually use, and for what

The fundamentals of my role haven’t changed, but the shape of it has. I’ve gone from a builder of dashboards to a curator of domain context and a builder of AI systems.

  • Codex and Claude Code for generating code, refactoring, and code review. Most of the time it’s faster than writing it myself. Sometimes it’s not, I’ll come back to that. 👀
  • Claude / ChatGPT for first-pass analyses. I feed in a previous analysis and ask it to draft a new one for a similar problem. I still rewrite most of it, but starting from a draft is much easier than starting from a blank page.
  • Agent and skill building for the parts of my work that repeat. Here I’m not the writer of the analysis, I’m the conductor, making sure the AI’s logic aligns with business goals.

The shift I didn’t expect

The bigger change wasn’t speed. It was scope.

A few weeks ago, a UX researcher reached out asking me to help understand a product behavior pattern. The analysis involved building a logistic regression to understand what drives users to return (for a product I don’t own).

A year ago, that kind of cross-functional ask would have required real setup: scoping the work, routing it to a data scientist to do the analysis, even for a proof of concept.

Now, stepping into an adjacent problem is much easier, because execution isn’t the limiting factor anymore. Judgement is.

Our team is also building an LLM-powered internal tool right now, even though none of us are full-stack web developers. The gap between “what I know” and “what I can build” has narrowed, not because we suddenly became experts, but because the execution layer is no longer where the time goes.

And this isn’t unique to data roles. I see engineers building tools outside their main stack, designers prototyping with code, PMs running their own analyses.

The shape of what someone can do at work is changing across the entire workforce.

Where my time actually goes now

Less coding. More everything else.

More time talking to PMs and stakeholders to understand what they need to move faster.

More time on the deep analyses where the pattern looks fine on the surface and only gets interesting when you push on the assumption underneath.

More time deciding what’s even worth building in the first place.

AI is fast at implementation, but it’s not yet reliable at knowing what’s meaningful to pursue. It tends to over-engineer when the context isn’t constrained, so part of my job is now framing the problem tightly enough that the output stays grounded. Strong references in, useful output out.

What I won’t outsource

Even with all this, there are parts of my work I still do myself.

I talk to PMs and stakeholders directly to understand what they actually need before any code gets written. I sanity-check data across sources manually, that’s the kind of work where being wrong is expensive and AI shortcuts haven’t earned my trust yet.

I design the experiments and write the recommendation at the end of an analysis, because AI lacks the domain knowledge to decide which metrics are worth tracking and which trade-offs are worth accepting.

There are also moments where writing the code myself is just faster than waiting for AI to generate and review it. I’ve stopped forcing it.

The point isn’t to use AI for everything, it’s to use it where it actually helps. For small code updates and edits, I let it handle the work. For framing, judgment, and decisions, that part stays mine.

💭 Final Thoughts

When writing code becomes easy, deciding what to build becomes the real bottleneck.

A year ago, you could still get by as a primarily execution-focused data scientist, someone who writes the SQL and Python code, builds the dashboard, answers the request. I don’t think that’s enough anymore.

The value is shifting toward understanding the business, the KPIs, the system behind the product. Toward being the person who uses AI as an execution layer, rather than being the execution layer.

I’ve stopped thinking about it as replacement and started thinking about it as positioning.

That’s the part of the year that actually changed me.

Xoxo,

Kessie 🧚

How I Use AI as a Product Data Scientist (A Year In) was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

Confessions of Building a Digital Wardrobe in C++

By Someone who is trying to learn C++

Continue reading on Code Like A Girl »


Code Like a Girl

The Evolution of Cybersecurity: From Simple Defenses to Intelligent Warfare.

Cybersecurity, interestingly, didn’t start as the complex, high-stakes battlefield you know it as today. It evolved quietly at first, and then rapidly.

As technology became deeply braided into every aspect of human life, what began as basic system protection transformed into a continuous, intelligent fight against highly adaptive adversaries.

Photo by Boitumelo on Unsplash

The Early Days: When security was simply an afterthought

In the 1970s and 1980s, cybersecurity wasn’t a defined field. Computers were isolated systems, used mainly by governments, research institutions, and large corporations. The primary concern wasn’t external attacks; it was system functionality.

One of the earliest known cybersecurity incidents, the Creeper Virus, was more of an experiment than a threat. It displayed a simple message and spread across ARPANET. Shortly after, the Reaper Program was created to remove it, marking the birth of defensive security.
At this stage, security was minimal and, by and large, experimental.

The Internet Era: Rise of Digital Threats

The 1990s changed everything. With the rise of the internet, systems became interconnected — and vulnerable.
Malware evolved from harmless experiments into destructive tools. Attacks like the ILOVEYOU Virus and the Melissa Virus demonstrated how quickly threats could spread globally, causing billions in damage.
This era introduced:

  • Antivirus software as a standard defense.
  • Firewalls to control network traffic.
  • Intrusion Detection Systems (IDS).

However, defenses were still largely signature-based, meaning they could only detect known threats. Attackers quickly learned to stay one step ahead.

The Modern Age: Sophisticated and Persistent Threats

As organizations digitized operations, cyberattacks became more targeted, strategic, financially motivated, and personal.
The emergence of Advanced Persistent Threats (APTs) marked a turning point. These weren’t random attacks — they were carefully planned campaigns designed to infiltrate, remain undetected, and extract value over time.
Incidents like Stuxnet showed that cyber warfare had entered the geopolitical stage. Meanwhile, ransomware attacks such as WannaCry disrupted healthcare systems, businesses, and governments worldwide.
Key advancements during this period included:

  • Security Information and Event Management (SIEM) systems.
  • Endpoint Detection and Response (EDR).
  • Cloud security frameworks.
  • Zero Trust architecture.

Cybersecurity was no longer just IT’s responsibility — it became a business-critical function.

The AI Revolution: A Double-Edged Sword

Artificial Intelligence is now redefining cybersecurity on both sides of the battlefield.

AI enables:

  • Threat detection at scale through behavioral analysis.
  • Anomaly detection beyond known signatures.
  • Automated response systems that act in real time.
  • Predictive intelligence to anticipate attacks before they occur.
Machine learning models can analyze massive datasets far faster than any human team, identifying subtle patterns that signal compromise.

How AI Empowers Attackers

At the same time, attackers are leveraging AI to:

  • Automate phishing campaigns with personalized content.
  • Develop polymorphic malware that constantly changes form.
  • Bypass traditional detection systems.
  • Generate deepfakes for social engineering attacks.

Cybersecurity today is moving toward proactive and intelligence-driven defense. Some of the most impactful advancements include:

  • Zero Trust Security: Never trust, always verify — every access request is continuously validated.
  • Extended Detection and Response (XDR): Unified visibility across endpoints, networks, and cloud.
  • Cloud-Native Security: Protecting dynamic, scalable environments.
  • Threat Intelligence Platforms: Real-time global insights into emerging threats.
  • Security Automation (SOAR): Reducing response time and human error.

Organizations are shifting from “defend the perimeter” to “assume breach and minimize impact.”

Photo by Joshua Sortino on Unsplash

The Road Ahead: Cybersecurity as a Continuous Strategy

Cybersecurity is no longer a static solution — it is a continuous, evolving strategy.
The future will likely be defined by fully autonomous security systems, AI-driven cyber defense ecosystems, increased regulation and global cooperation, and a stronger focus on human factors and insider risk.

One thing is clear: cybersecurity is no longer about preventing attacks entirely — that’s unrealistic. It’s about resilience, speed, and adaptability in the face of constant threats.

Final Thoughts

The evolution of cybersecurity reflects a simple, realistic truth: as technology advances, so do the risks that come with it.

The journey has been defined by continuous adaptation. Organizations that succeed are not those with the most tools — but those with the ability to evolve as fast as the threats they face.

Enjoyed the article? Give it a clap and share your thoughts in the comments.
Have a different perspective? I’d genuinely like to hear it.
Until then, stay safe and stay secure.😁

The Evolution of Cybersecurity: From Simple Defenses to Intelligent Warfare. was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

From Jira Bug to Draft PR

I wanted bugs filed in Jira to turn into draft pull requests on GitHub without anyone needing to shepherd them through the middle.

That’s the one-line version. The actual version took about two weeks and ended up with four moving parts:

  1. A Lambda that takes a Jira webhook, classifies the ticket, mirrors it as a GitHub issue, and copies attachments to S3.
  2. A triage workflow that generates a repo map and decides, for every freshly opened GitHub issue, whether to assign Copilot coding agent or just post a diagnosis comment.
  3. A log analyser in dev-scripts/ for the heavier path, where attached logs need to be turned into a structured root-cause analysis first.
  4. Copilot coding agent itself, which opens the draft PR.

None of the pieces were especially hard on their own. Each one was some Python, some Terraform, and some agent instructions. The time went into the joins: Jira’s idea of valid JSON, webhook retries, Copilot’s token rules, S3 log links, and a model that decided to ask for more information instead of checking the repo.

So this is the long version. The small annoying bits are most of the story.

The shape

Stage 1: The Lambda

The Lambda is the boring bit you only notice when it gets something wrong.

When a Jira ticket is created or updated, the Lambda receives the webhook, decides whether the ticket is actionable, and opens or updates the matching GitHub issue. It also carries over attachments or S3 links so the GitHub side has enough context to do something useful.

The classifier itself is mostly regexes and form fields. Not glamorous. The parts that slowed me down were the places where Jira, AWS, and GitHub all had slightly different ideas of what “simple webhook” meant.

Webhook payloads are user-controlled JSON, sometimes barely

Jira’s automation rules let you POST a custom JSON body to a URL. You write the body as a template and Jira fills in the values from the ticket. In theory, simple. In practice, the validator that decides whether your template is “valid JSON” is brittle in ways nobody documents.

Things I had to discover the slow way:

  • Some smart-values aren’t supported on every tenant. The literal {{issue.url}} text was being left in the body on mine, breaking the JSON.
  • Array-valued smart-values have to come last in their object, or the validator fails before you can even save.
  • Free-text fields like description blew the body up whenever a user pasted text with control characters or unescaped quotes.

I ended up bisecting the body field by field, saving the rule each time, until I found which smart-value was breaking it. The validator’s error message is basically the same regardless of which line is wrong.

What I now do by default: send the smallest possible payload — usually just the ticket key — and have the Lambda fetch everything else via the Jira API. One extra call per webhook is free. Debugging the validator is not.
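A minimal sketch of that pattern in Python — the handler shape and environment variable names are illustrative, and the Jira call assumes the standard /rest/api/2/issue endpoint:

import base64
import json
import os
import urllib.request

# Illustrative configuration: these environment variable names are stand-ins
# for however you store Jira credentials, not from the actual Lambda.
JIRA_BASE_URL = os.environ["JIRA_BASE_URL"]
AUTH = base64.b64encode(
    f"{os.environ['JIRA_EMAIL']}:{os.environ['JIRA_API_TOKEN']}".encode()
).decode()


def fetch_issue(key: str) -> dict:
    # One extra call per webhook: pull the full ticket from Jira's REST API
    # instead of trusting a large templated webhook body.
    req = urllib.request.Request(
        f"{JIRA_BASE_URL}/rest/api/2/issue/{key}",
        headers={"Authorization": f"Basic {AUTH}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def handler(event, context):
    # The automation rule sends only the ticket key, so the body stays valid
    # JSON no matter what users paste into free-text fields.
    payload = json.loads(event["body"])
    issue = fetch_issue(payload["key"])
    # ...classify, mirror to GitHub, copy attachments to S3 (omitted)...
    return {"statusCode": 200, "body": json.dumps({"mirrored": issue["key"]})}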

Make it safe to retry, then assume it will be

Webhooks have at-least-once delivery. The Lambda can see the same event twice, see an update while a previous run is still in flight, or trigger itself by editing the same ticket. None of those should create duplicate GitHub issues or comments.

Three mechanisms, roughly:

  • A hash of the classification result, written back to the ticket. If the new hash matches the stored one, skip everything.
  • A sentinel label that says “the classifier just touched this.” The Jira rule excludes that label so the Lambda’s own writes don’t loop.
  • Reading the existing GitHub-issue mapping on every event, not just on updates.
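A rough sketch of how those three checks might hang together — the helper names, label string, and ticket shape are made up for illustration:

import hashlib
import json

SENTINEL_LABEL = "auto-classified"  # hypothetical label name


def already_processed(ticket: dict, classification: dict, stored_hash: str | None) -> bool:
    # 1. Hash the classification result and compare it to the hash written
    #    back to the ticket on the previous run.
    new_hash = hashlib.sha256(
        json.dumps(classification, sort_keys=True).encode()
    ).hexdigest()
    if stored_hash == new_hash:
        return True

    # 2. Sentinel label. In the real setup the Jira rule filters this out
    #    before the webhook fires; checking again here is just defensive.
    if SENTINEL_LABEL in ticket.get("fields", {}).get("labels", []):
        return True

    return False


def find_existing_issue(ticket_key: str, mapping: dict[str, int]) -> int | None:
    # 3. Always read the ticket-to-GitHub-issue mapping, even on "created"
    #    events, so a retried webhook updates the twin instead of creating one.
    return mapping.get(ticket_key)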
Stage 2: The triage layer

By the time I reached the GitHub-issue side, the Lambda was mirroring tickets reliably enough that the next question was obvious: can Copilot do anything useful with them?

The naive plan was: assign Copilot coding agent to every issue the Lambda creates, let Copilot figure it out.

That plan falls over as soon as the first vague ticket arrives. Copilot coding agent is not a triage tool.

What Copilot coding agent actually does

When you assign Copilot to an issue, it:

  1. Reads the issue body and existing comments at the moment of assignment.
  2. Researches the repo in its own GitHub Actions VM.
  3. Drafts a plan.
  4. Opens a draft PR — success or otherwise.
  5. Requests review.

What it does not do:

  • Post “I need more info before I try”
  • Decide the issue isn’t fixable and abstain
  • Use your domain-specific tooling
  • Read comments added after assignment

If the issue is vague, you get a low-quality draft PR you’ll close. If the issue is a duplicate, you get a draft PR. If it’s a “the docs don’t make sense” question, you get a draft PR for that too.

Useful tool. Wrong contract for “triage every opened issue.”

The three-way decision

What I actually needed before Copilot ran was a small decision point:

Classification    Action
auto_fixable      Assign Copilot, let it open a draft PR
needs_info        Comment listing what's missing, don't assign
diagnosis_only    Comment with root cause + workaround, don't assign

Copilot only fires when there is a real fix to make and enough information to make it. Everything else gets a comment and stops there.

The triage model is Claude Sonnet 4.6 routed through the Copilot SDK: same billing surface as the coding agent, but chat completions instead of the cloud agent. In practice the pipeline uses two different shapes of agent. Claude does the messy issue reasoning. Copilot coding agent does the repo-aware code edit.

The token maze

This is the part I would shortcut hardest if I started over.

Copilot SDK has its own auth contract, separate from regular GitHub auth. The SDK does not accept:

  • GITHUB_TOKEN (the built-in Actions token)
  • ghp_* classic PATs
  • ghs_* GitHub App installation tokens

It accepts:

  • gho_* OAuth user tokens
  • ghu_* GitHub App user tokens
  • github_pat_* fine-grained PATs with Copilot Requests: Read

The fine-grained PAT path looks easy until you discover that org-owned fine-grained PATs don’t expose the Copilot Requests permission. There’s an open GitHub issue about it. If your repo is in an org, that path is blocked.

The OAuth route works but requires running a device flow, which is annoying when what you want is “give CI a secret and move on”. After two days of permission spelunking, I found the shortcut: the ghu_* token already exists on any machine signed into Copilot. It's sitting in ~/.config/github-copilot/apps.json. Pull it out, drop it into a secret, done.
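A small sketch of pulling that token out without assuming anything about the file's schema — it just walks the JSON looking for a ghu_-prefixed string:

import json
from pathlib import Path

APPS_JSON = Path.home() / ".config" / "github-copilot" / "apps.json"


def find_ghu_token(node):
    # The exact layout of apps.json isn't documented here, so walk the whole
    # structure and return the first string that looks like a ghu_* user token.
    if isinstance(node, str) and node.startswith("ghu_"):
        return node
    if isinstance(node, dict):
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        token = find_ghu_token(child)
        if token:
            return token
    return None


if __name__ == "__main__":
    print(find_ghu_token(json.loads(APPS_JSON.read_text())) or "no ghu_ token found")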

That’s the SDK token. Then there’s the assignment token.

The Copilot coding agent assignment goes through a separate GraphQL call (replaceActorsForAssignable), and that one needs a PAT that can see Copilot in suggestedActors. The Actions GITHUB_TOKEN cannot — GitHub explicitly filters Copilot out of suggested actors for the Actions identity. This is by design: the same loop-prevention rule that stops Actions from triggering other Actions.

So I tried to consolidate. Use GITHUB_TOKEN for assignment, simpler workflow, fewer secrets. The error was crisp:

Copilot is not in suggestedActors — coding agent is not enabled
for this repository, or the token lacks the scope to see it.

Coding agent was enabled. The token just couldn’t see it.

Final shape: three tokens.

Secret                  What it does                                    Token type
COPILOT_SDK_TOKEN       Triage + log analysis (Copilot SDK inference)   ghu_* from local Copilot
COPILOT_ASSIGN_TOKEN    Assign coding agent to issue                    Fine-grained PAT, repo-scoped
GITHUB_TOKEN            Comments, labels, gist fetches                  Built-in Actions token

Three tokens for three different jobs. Annoying, but at least explicit.
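For reference, a hedged sketch of the assignment call with the second token. The GraphQL names follow the replaceActorsForAssignable mutation and suggestedActors field mentioned above; the agent's login is an assumption to check against whatever your tenant actually returns.

import json
import os
import urllib.request

TOKEN = os.environ["COPILOT_ASSIGN_TOKEN"]  # the fine-grained PAT from the table above


def graphql(query: str, variables: dict) -> dict:
    req = urllib.request.Request(
        "https://api.github.com/graphql",
        data=json.dumps({"query": query, "variables": variables}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


def assign_copilot(owner: str, repo: str, issue_number: int) -> None:
    # Copilot only appears in suggestedActors when the token is allowed to see
    # it — the Actions GITHUB_TOKEN never will.
    data = graphql(
        """
        query($owner: String!, $repo: String!, $number: Int!) {
          repository(owner: $owner, name: $repo) {
            issue(number: $number) { id }
            suggestedActors(capabilities: [CAN_BE_ASSIGNED], first: 100) {
              nodes { login ... on Bot { id } }
            }
          }
        }
        """,
        {"owner": owner, "repo": repo, "number": issue_number},
    )
    actors = data["repository"]["suggestedActors"]["nodes"]
    # Assumption: the coding agent's login on this tenant.
    copilot = next(a for a in actors if a["login"] == "copilot-swe-agent")

    graphql(
        """
        mutation($issueId: ID!, $actorIds: [ID!]!) {
          replaceActorsForAssignable(input: {assignableId: $issueId, actorIds: $actorIds}) {
            assignable { ... on Issue { id } }
          }
        }
        """,
        {"issueId": data["repository"]["issue"]["id"], "actorIds": [copilot["id"]]},
    )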

The two paths

Once auth was out of the way, the workflow branched on a label:

Trigger                     Path
issues.opened (no label)    Generate repo map → Claude triage → comment → maybe assign
labeled: analyze-logs       Download log → run log_analyze.py → log-triage comment → maybe assign

Path A is cheap. The repo map gives the model project layout, Claude classifies the issue, and assignment is gated on confidence >= 0.7.

Path B is heavy. The Lambda renders log attachments as markdown links to S3 pre-signed URLs. When the analyze-logs label gets added, the workflow downloads the log and runs the multi-agent log analyser from stage 3. That already produces root_cause, possible_fixes, and code references, so there is no point asking a smaller triage prompt to rediscover the same thing.

Most issues take Path A. The expensive path only runs when there is a log worth spending time on.
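Either way, the decision at the end is small. A sketch, assuming the triage output is a JSON object with classification and confidence fields (the exact schema is up to the prompt):

import json

ASSIGN_THRESHOLD = 0.7  # assignment is gated on confidence >= 0.7


def decide(triage_output: str) -> str:
    result = json.loads(triage_output)
    classification = result["classification"]
    confidence = float(result.get("confidence", 0.0))

    if classification == "auto_fixable" and confidence >= ASSIGN_THRESHOLD:
        return "assign-copilot"        # let the coding agent open a draft PR
    if classification == "needs_info":
        return "comment-needs-info"    # list what's missing, don't assign
    return "comment-diagnosis"         # root cause + workaround, don't assign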

Grounding the triage

The fix was not a smarter model. It was making the procedure less optional.

I’d already given the triage agent the same search_repo and read_repo_file tools that log_analyze.py uses. Tools alone weren't enough. The model treated them as optional. So the prompt got a numbered procedure:

  1. Extract every identifier from the issue body
  2. search_repo each one
  3. Follow the path-chain: registry → template → implementation
  4. read_repo_file to confirm the leaf
  5. Only then classify

I also added a small set of owner-to-file routing rules that I had internalised but the model had not. Things like “templates owned by namespace A live in config X, namespace B lives in config Y”. Encoding those cut a whole class of “model guessed the wrong file” misses.

Then citation discipline. diagnosis and copilot_instructions must include file:line references with before/after values, not vague paths. Vague paths gave Copilot a worse starting position than no instructions at all.

And one carve-out. The original needs_info rubric was too bug-report-shaped: repro steps, expected vs actual, environment. That is right for a crash, but wrong for a change request like "bump version to 7" or "rename flag X to Y". Those have no repro steps because they do not need any. The model was pattern-matching on missing bug fields and refusing to classify obvious edits as fixable. The carve-out is simple: when the body names an explicit target value, do not demand a repro before considering auto_fixable.

After all four edits, the same issue went auto_fixable → assign Copilot → draft PR. Copilot still does the work. The triage layer just stops getting in its way.

Single LLM vs orchestrated pipeline

I wrote about this gap before, in Computer Says No. It applies here too.

A vanilla LLM call on the issue body would have classified needs_info and stayed there forever: no tools, no grounding, no way to verify. The orchestrated version reads actual files, traces actual chains, and only then decides. Same model. Different shape.

The annoying part is that Copilot coding agent already does this internally. It researches the repo before drafting. That’s why assigning it directly worked on some issues my own triage was bouncing. The triage layer needed the same kind of grounding before deciding whether to hand off. Otherwise it was just a worse version of Copilot gating a better version of itself.

Once I made the triage agent use its tools the way Copilot uses its own, the pipeline started behaving the way I wanted: most issues either get a useful comment or a draft PR within minutes of opening.

Stage 3: The log analyser

Stage 3 is the heavy path the triage layer hands off to. I built it before the triage layer existed, because the bugs that mattered were arriving as megabyte-sized application logs and reading them by hand was killing my afternoons. By the time I needed a triage agent, this tool was already doing useful work.

The shape:

The split that matters

The line I kept coming back to was: deterministic where it can be, model-driven where it has to be. If you can compute something from the log without judgement, compute it. If it needs judgement, give it to a model with grounded tools. Try not to blur the two.

What that meant in practice:

  • Actor detection. Logs contain both the orchestrator side and the worker side, sometimes on the same machine, both logging under the same [orchestrator] tag. A regex over thread-name patterns determines which actors are present and which one to prioritise (worker-side first, because that's where root causes live). No model involved.
  • Window selection. Logs are 50–100 MB. Models can’t usefully read the whole thing. The deterministic layer offers anchors such as last_task, last_abort, and last_traceback, then slices the relevant ~500 lines. The model never sees the rest unless it asks for more.
  • Evidence ranking. Within the window, traceback frames beat worker-side exceptions beat protocol-level exceptions beat task-abort summaries beat generic warnings. This priority is hard-coded; the model can override it only with explicit reasoning. Without this, models default to “the first ERROR line is the cause” and you get diagnoses that point at the wrapper.
  • File reference extraction. If the log mentions sdk/foo/bar.py:247, the deterministic layer captures that and pre-loads the file as context. The model doesn't have to figure out it's relevant.

By the time the scout agent runs, it is looking at a couple hundred lines of high-signal log plus pre-resolved file references. Not the raw log. Not a generic instruction to “find the bug.”
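A rough sketch of the window-selection idea — the anchor names come from above, but the regexes and window size are illustrative, not the real ones:

import re

WINDOW = 500  # roughly the slice size described above

ANCHORS = {
    "last_traceback": re.compile(r"Traceback \(most recent call last\)"),
    "last_abort": re.compile(r"abort", re.IGNORECASE),
    "last_task": re.compile(r"\btask\b", re.IGNORECASE),
}


def pick_window(lines: list[str]) -> list[str]:
    # Prefer the most specific anchor; fall back to the tail of the log.
    for name in ("last_traceback", "last_abort", "last_task"):
        hits = [i for i, line in enumerate(lines) if ANCHORS[name].search(line)]
        if hits:
            anchor = hits[-1]  # "last_" anchors mean the final occurrence
            start = max(0, anchor - WINDOW // 2)
            return lines[start:start + WINDOW]
    return lines[-WINDOW:]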

The agent stack

The analyser uses three separate Copilot SDK sessions, with a different model for each role:

Role        Model             Why
Scout       gpt-5-mini        Cheap. Plans which files/searches matter. Doesn't need to reason deeply.
Analyst     claude-opus-4.6   Strong. Does the actual root-cause reasoning with grounded repo tools.
Reviewer    gpt-5.4           Strong, different family. Challenges the analyst. Up to three rounds of disagreement.

The reviewer loop is the part I am most attached to. Without it, the analyst picks an answer and you take it. With it, the reviewer either accepts or sends a structured “no, here’s why I disagree” back to the analyst, which reruns with that as additional context. After three rounds, whatever they converge on is the answer. If they still disagree, an optional orchestrator model reconciles.

This is more expensive than a single-model call. It is also much better on the awkward 15–20% of investigations where the first-pass answer is plausible but wrong.
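Stripped of SDK details, the loop is roughly this shape — the two callables are stand-ins for the analyst and reviewer sessions:

MAX_ROUNDS = 3  # up to three rounds of disagreement


def run_investigation(evidence: str, run_analyst, run_reviewer) -> dict:
    objection = None
    diagnosis = {}
    for _ in range(MAX_ROUNDS):
        # The analyst reruns with the reviewer's structured objection as
        # additional context.
        diagnosis = run_analyst(evidence, objection)
        verdict = run_reviewer(evidence, diagnosis)
        if verdict.get("accepted"):
            return diagnosis
        objection = verdict.get("disagreement")
    # After three rounds, take whatever they converged on (or hand the
    # disagreement to the optional orchestrator model).
    return diagnosis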

The tools, for real

The agents don’t get “use search_repo” as a hint. They get actual SDK-defined tools backed by Python implementations:

search_tool = define_tool(
    "search_repo",
    description=(
        "Search the monorepo for lines matching regex patterns. "
        "Use this to find relevant code when the supplied evidence is "
        "insufficient to diagnose the issue."
    ),
    handler=_handle_search_repo,
    params_type=SearchRepoParams,
    skip_permission=True,
)

_handle_search_repo does a real ripgrep-style scan over the checked-out repo, returns hits with path, line, text. read_repo_file reads bounded snippets (default 40 lines of context) from a file the model names. Path resolution allows relative paths or unique-filename suffixes — the model can ask for dataframe.py and the tool finds sdk/data/sources/dataframe.py if it's the only match.

The bound repo_root matters. The tool can't escape the checkout (path traversal blocked at the resolver layer), can't read absolute paths, can't see ignored directories. Read-only by construction. The agent has every relevant lookup it needs and zero ability to do anything destructive.
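A sketch of what that resolver boundary can look like — not the real implementation, just the same rules: no absolute paths, no escaping the checkout, unique-suffix matches allowed:

from pathlib import Path


class RepoPathError(ValueError):
    pass


def resolve_repo_path(repo_root: Path, requested: str) -> Path:
    if Path(requested).is_absolute():
        raise RepoPathError("absolute paths are not allowed")

    root = repo_root.resolve()
    candidate = (root / requested).resolve()

    # Relative paths must stay inside the checkout; traversal is rejected here.
    if candidate.is_relative_to(root) and candidate.is_file():
        return candidate

    # Otherwise treat the request as a filename and accept it only if exactly
    # one file in the repo matches (e.g. "dataframe.py").
    matches = [p for p in root.rglob(Path(requested).name) if p.is_file()]
    if len(matches) == 1:
        return matches[0]
    raise RepoPathError(f"cannot resolve {requested!r} inside the repo")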

This is what makes the analyst’s diagnoses grounded. Every file path it cites came from a real read_repo_file result. Every code reference was a real search_repo hit. The output is still model-synthesised, but the raw material is real.

The instruction file

Domain rules about how to read these specific logs aren’t in code; they live in log_analyze_instructions.md, loaded automatically and appended to every agent's system prompt. The file is short, opinionated, and mostly negative — it tells the models what not to do:

  • “Treat GenericAbortError as a wrapper unless deeper evidence is missing."
  • “Do not report wrapper messages as the root cause if the selected window contains earlier causal evidence.”
  • “Prefer multiple small targeted investigations over one large unfocused pass.”
  • “If the model owner is not internal, bias toward the model input path, not opaque model internals.”

These were learnt the expensive way. The first version of the analyser kept reporting “GenericAbortError” as the root cause for every failure. Technically true, completely useless. The wrapper-error rule fixed that. The third-party model rule came after watching the analyst speculate about model internals it could not read, when the actual bug was in the data pipeline feeding the model.

The rule I took from this: domain knowledge belongs in instructions, not code. Encode the rule once in markdown and every agent in the stack inherits it. The --agent-instruction and --agent-instruction-file flags let me steer per-run without editing the repo.
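Wiring that up is small. A sketch, assuming the file is simply read and appended to each agent's system prompt, with the two flags named above:

import argparse
from pathlib import Path

DEFAULT_INSTRUCTIONS = Path("log_analyze_instructions.md")

parser = argparse.ArgumentParser()
parser.add_argument("--agent-instruction", help="extra steering text for this run")
parser.add_argument("--agent-instruction-file", help="path to a file with extra steering")


def build_system_prompt(base_prompt: str, args: argparse.Namespace) -> str:
    # Domain rules live in markdown, not code: the same instructions get
    # appended to every agent's system prompt in the stack.
    parts = [base_prompt]
    if DEFAULT_INSTRUCTIONS.exists():
        parts.append(DEFAULT_INSTRUCTIONS.read_text())
    if args.agent_instruction_file:
        parts.append(Path(args.agent_instruction_file).read_text())
    if args.agent_instruction:
        parts.append(args.agent_instruction)
    return "\n\n".join(parts)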

Streaming and timeouts

Each SDK call has a timeout: 180s for scout, 420s for analysis/review. They also use streaming events. Streaming matters for two reasons: progress logs appear in stderr while the model is still thinking, and if a turn times out before completing, the partial content can often be salvaged instead of throwing the whole investigation away.

The fallback chain when a turn times out:

  1. Did we get a final assistant message before timeout? Use it.
  2. Did we accumulate any streamed parts? Concatenate and use them.
  3. Can we read the latest assistant message from session history? Use that.
  4. None of the above? Raise — the run is genuinely lost.

I built this after the third time a seven-minute analysis call basically succeeded but threw on the timeout boundary. The work was done; the SDK just had not formally closed the turn. The fallbacks recover that work.
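The chain itself is only a few lines. A sketch with stand-in names for what the SDK exposes on a timed-out turn:

class TurnLost(RuntimeError):
    pass


def salvage_turn(final_message, streamed_parts, session_history):
    # The three arguments are stand-ins: the completed assistant message (if
    # any), the accumulated streaming chunks, and the prior session history.

    # 1. A final assistant message arrived before the timeout.
    if final_message:
        return final_message

    # 2. Partial streamed content is usually enough to keep the investigation.
    if streamed_parts:
        return "".join(streamed_parts)

    # 3. Fall back to the latest assistant message in session history.
    for message in reversed(session_history):
        if message.get("role") == "assistant" and message.get("content"):
            return message["content"]

    # 4. Nothing salvageable: the run is genuinely lost.
    raise TurnLost("turn timed out with no recoverable content")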

What came out of building it

log_analyze.py taught me most of what the triage agent in stage 2 needed:

  • Tools beat prompts. Give the model real search_repo and read_repo_file, not a description.
  • Deterministic preprocessing wins. Don’t make the model read 50 MB; pre-rank evidence and slice the window.
  • Domain rules go in instructions, not code.
  • Multi-agent isn’t just “more is better” — it’s specifically scout-cheap, analyst-strong, reviewer-different-family.
  • Defensive parsing is part of the contract.
  • Streaming + timeout-fallback turns flaky into robust.

The triage layer reuses build_repo_tools() directly. It shares the same search_repo / read_repo_file implementations as the analyst. It gets the same grounding for free. That code reuse is why the triage prompt can stay fairly short: the heavy lifting is in tools the analyser already proved out.

Stage 4: Copilot assignment and draft PR

If you made it here, thank you. This is actually the easy part.

Once an issue is deemed auto_fixable, the workflow assigns Copilot coding agent. It analyses the request in the cloud agent environment and opens a draft PR.

The thing I like is that there is still a human review point, just later. The workflow does not merge code. It only spends Copilot/GitHub minutes when the triage layer thinks there is a real edit to make.

Some open questions readers might have

Why not use an off-the-shelf tool? The simple answer is that I didn’t want to. I had fun building this, and I learnt more by sitting in the annoying bits myself.

Could something like n8n have done this instead? Yes and no. It could have saved me time on the boring routing parts, and would have been a great choice if the pipeline was “Jira event in, GitHub issue out, maybe a Slack ping”. I still would have had to do my own work for the AWS infrastructure, the agent grounding needs custom code, and the Copilot auth dance still needs extra hip movement. I preferred the learning curve to be focused on building blocks rather than tools.

Why Jira? It is the workflow tool my company already uses. I wanted to minimise friction for non-engineer colleagues.

Why GitHub Copilot instead of OpenAI or Anthropic directly? Our code already lives in GitHub and we already have Copilot enabled, so it felt natural to try that route first.

Why do the S3 dance for logs? The bug reports already arrive with S3 links pointing to the relevant logs. Whatever orchestration tool I picked, I still had to get the logs out of S3 and into the analysis path.

Where it lands

The end-state is a pipeline that, on every Jira bug:

  • Mirrors the ticket to a GitHub issue in the right team’s repo
  • Mirrors any attachments to S3 with pre-signed URLs in the GitHub issue body
  • Generates a repo map for grounding
  • Routes the GitHub issue through Claude triage (cheap path) or log_analyze.py (heavy path)
  • Posts a structured diagnosis comment
  • Conditionally assigns Copilot coding agent when the issue is auto-fixable with high confidence
  • Marks the issue auto-triaged to prevent double-handling
  • Re-classifies and cross-repo-moves cleanly when the team label changes
  • No-ops idempotently when nothing’s changed

Two LLMs, three tokens, two paths, one Lambda, one workflow. Most of the value isn’t in the model calls — it’s in the gates between them.

If you’re doing something similar: don’t try to make Copilot coding agent a triage tool. It’s a fix tool. Build the triage layer separately, and let it decide whether to hand off.

And if you’re plumbing webhooks into AWS and wondering why your auth isn’t working — curl it directly, layer by layer. The error code you see in the audit log is rarely from the layer you think it is.

Have you wired Copilot agents into a custom workflow? I’d love to hear what auth maze you got stuck in — and whether your triage layer is gating better than mine.

From Jira Bug to Draft PR was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

Why Creative Women in Nature-Tech Change the World Right Now

And neurodiverse youngsters too

Continue reading on Code Like A Girl »


Code Like a Girl

I Thought Dark Mode Was Just a Toggle. It Turned Into a Full-System Refactor

My website was technically done. So I thought: let’s just add one more thing.

Dark mode.

Developers love that, right? I didn’t even use that many colors — it should be quick to swap them around.

And yet, it turned into a full-system refactor: typography, code highlighting, images, and rendering behavior.

The first problem: hardcoded colors

The problem showed up immediately: the few colors I used were hardcoded everywhere. A heading had one hex value, and a paragraph had another. Changing the theme meant updating each instance manually. No, thank you.

The fix

So I introduced CSS variables and defined colors by their roles.

  • --text-primary
  • --text-secondary
  • --background-primary
  • --border

With this, a heading wasn’t “black” anymore. It was text-primary. A background wasn’t “white”. It was background-primary.

This sounds like a small change but it fundamentally changed how I approached styling. I stopped thinking in terms of individual colors across themes and focused instead on the role and intention of each element, with color as just an implementation detail.

At this point, I thought I was mostly done. I wasn’t even close.

Dark mode is not black

With a color system in place, the next step seemed obvious and trivial: invert it. Just change black to white and white to black.

Except it looked terrible. Who would’ve thought that having white text on black would feel so… bright? It was harsh and fatiguing. Everything started blending together — almost like I had suddenly developed astigmatism.

Turns out dark mode isn’t black and white. Maximum contrast does not make text readable.

The fix

Shades of grey.

Instead of pure white, I switched to light grey and text was miraculously legible again. For secondary text, an even softer grey.

Good dark mode was about tuning contrast, making it proportionate and layered.

And that’s all my problems solved, said no one ever.

White text on black was too much contrast on my screen. I felt it in my eyes.

Subtle change to reduce contrast, making it easier to read over a longer period of time.

Every surface breaks differently

Even after fixing colors, the UI was still inconsistent. Different parts of the website broke in different ways.

Typography (Tailwind)

I was using Tailwind’s typography plugin (prose) for my writing pages. It worked well in light mode. But once I introduced my own variables, things started conflicting. Headings, links, and inline elements were all pulling from Tailwind’s internal color definitions instead of mine.

Some styles updated, others didn’t. Fixing one element would break another. The abstraction broke down, and the complexity I’d tried to hide came rushing back.

The fix

I explicitly mapped Tailwind’s typography variables to my own. Instead of relying on defaults, I treated typography as part of my system.

Once everything pointed back to the same set of variables, things became predictable again.

Code syntax highlighting

I use a lot of code snippets, especially in my JavaScript event loop article series. Dark mode introduced a new inconsistency with code syntax highlighting:

  • Github light theme was unreadable in dark mode
  • Github dark theme didn’t look great in light mode

Who would’ve thought?

For a while, I assumed I had to pick one.

Github’s light theme in dark mode was impossible to read.

Github’s dark theme in light mode looked washed out.

The fix

Use both and switch dynamically based on the mode. It sounds pretty obvious now, but at the time, I genuinely thought I had to choose.

Images

Images introduced a different kind of problem. Some worked fine. Others didn’t translate at all.

My hero image is of a sunrise. From the start, I imagined using a sunset version for dark mode. Did I create dark mode just so that I could use this image? Maybe.

Thankfully, this was easily implemented by including both images and switching between them based on the mode.

But my SVG diagrams were harder. I tried making their colors dynamic using CSS variables but it didn’t work reliably.

The fix

Instead of forcing everything to be dynamic, I created two versions of each diagram, one for each mode. It felt less elegant at first, but it worked better. Not everything should be dynamically styled.

Diagrams designed for light mode don’t translate automatically.

The problem wasn’t only styling — it was timing too

After fixing all that, I refreshed the page for my moment of victory. A flash of light mode appeared before it switched to dark. It was subtle, but definitely there. And yes, the temptation to pretend that didn’t happen was definitely there too.

The browser was rendering before the correct theme was applied. By the time JavaScript set the correct theme, the browser had already painted the wrong one.

The flash: light mode renders before dark mode is applied.

The fix

The theme needed to be determined before rendering. Moving the theme logic earlier removed the flash entirely. It was a small change technically, but it had a big impact on how the site felt.

What this changed for me

I thought I was adding a feature: a toggle button and a visual enhancement that sits on top of everything else.

But dark mode didn’t sit on top of my UI. It ran through it, and every part of the system had to agree. None of the above was individually difficult. But together, they revealed that dark mode is a system, and one that needs to be designed intentionally.

If you’re implementing dark mode

A few things I wish I knew earlier:

  • Define colors by role, not value
  • Avoid extreme contrast
  • Treat typography as part of your system
  • Don’t force everything to be dynamic
  • Handle theme selection before render

If you’re curious, the full implementation and visuals are on my site. This article was also originally published there.

I Thought Dark Mode Was Just a Toggle. It Turned Into a Full-System Refactor was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

AI Agents Are Living the Michael Scott Dating Arc. And We’re All Watching.

The hype was Jan. The reality is a series of increasingly bad decisions. The good news? Holly is coming.

Continue reading on Code Like A Girl »


Greater Kitchener Waterloo Chamber of Commerce

Fearless Female (May): Dorothy Zubel

On the first Tuesday of every month, we’ll announce a new Fearless Female, including a video interview of them sharing their business story. Want to be featured as a Fearless Female?

Contact Memberships for more details. The Fearless Female Program would not be possible without our Title Sponsor, Scotiabank.

To learn a little more about the Scotiabank Women Initiative, and why they’ve chosen to sponsor this program, see the video below.

 

The Fearless Female we’re featuring for the month of May is Dorothy Zubel, Co-Founder, Chief Executive Officer of The Finance Group.

Dorothy Zubel is the Co-Founder and CEO of The Finance Group, where she leads the vision to redefine the future of finance through an insights-driven, technology-enabled model.

With over 15 years of experience across accounting, finance, and systems implementation, Dorothy has held senior finance roles at small, mid-sized, and large organizations. Today, her focus has evolved from client advisory to building a modern finance firm that leverages technology, including AI, to reduce reliance on manual processes and elevate the role of finance professionals.

Dorothy is passionate about transforming finance from a reactive, compliance-driven function into a proactive, insight-led discipline that delivers clarity, confidence, and peace of mind to business leaders. She is equally committed to breaking the glass ceiling for women in finance, creating opportunities for the next generation of leaders to thrive in a more innovative and inclusive industry.

As CEO, Dorothy is focused on scaling a values-driven organization that combines people, process, and technology to deliver meaningful impact — both for clients and within the profession itself.

Outside of work, she enjoys spending time with her family, traveling, and exploring new parts of the world.

To learn more about Dorothy’s journey as a Fearless Female, watch the interview below (or read the written format).

Tell us more about The Finance Group and your role at the company.

The Finance Group is a fractional finance firm. We’ve been around for about four years, but I’ve been working in the fractional finance world for about 12 years.

I came from corporate finance, and when I entered the fractional finance world, I noticed it was transactional in nature. A lot of people were just posting entries and spitting out financials to business owners, and business owners really weren’t getting the insights they needed into their finances. And so, I started delivering services in the fractional finance world the way I did in corporate finance, which was finding efficiencies, cost savings, teaching leadership how to read financial statements, and that really resonated and made a difference to the business owners that I was working with.

I subsequently met my business partner, Donna Gleha, and we wanted to bring that vision of fractional finance to a larger audience, and so we launched The Finance Group four years ago.

What inspired you to pursue the finance field?

Originally, I started working really young. So, I started my working career in retail and became a manager of a retail store and realized I enjoyed business but didn’t see a future in retail. It was grueling hours. So, I went back to university, and I did my Bachelor of Commerce at the University of Toronto, which led me to accounting and finance. After graduating from university, I started working with Enterprise Rent-A-Car because I loved their promote from within culture, as well as their leadership development program.

And so, keeping with that, within about seven months of working there, I was hired into their accounting department, and that’s where my real accounting journey began, really learning the ropes, so starting from the ground up, and was able to kind of move and develop through that role as well, and ultimately loved the fact that they taught their branch leaders and branch managers the financials to each branch and how it operated and their profitability. And that allowed those branch managers to effectively manage the branches and to really drive profitability of each individual branch.

And that kind of style I loved. I thought every owner should know how they’re doing, and so that’s what ultimately served me throughout my career, is making sure there was a deep understanding of the financials for most business owners.

How did your experience at University of Toronto prepare you for a leadership role?

Yeah, so a couple things. I mean, after university and after joining Enterprise, I did do my CPA. I was fortunate enough to do my CPA when it was the CMA, and they had a rigorous two-year, like, case program where you would analyze companies and how they were doing. I found that work fascinating.

And, you know, even coming up in my career through Enterprise, I was overseeing people as I kind of grew up, grew in roles there, and then ultimately started working in other small, medium businesses where I had a team reporting to myself, and that allowed me to develop some of my leadership style. Further to that, I think everything is about learning and growth, so I also work closely with a leadership coach where I continue to foster my leadership skills and my ability to be a successful CEO.

What are some of your accomplishments so far?

First off, I think I’m proud of the path I took in terms of taking a step back from regular corporate finance into fractional finance 12 years ago. I received a lot of discouragement from that strategy, and for me it was an extremely successful journey and an extremely empowering journey. And then, you know, obviously a huge milestone for me four years ago with my business partner Donna Gleha, launching the finance group and, you know, us able, being able to grow the business today with over 40 employees and continuing to grow is a huge milestone for us, and we’re extremely proud of it.

What are some of the challenges that you have faced so far?

So, as you’re scaling a business, there’s always challenges that you encounter. You know, we’ve had challenges from not always hiring the right people, from having cash compression issues, as well as not having the correct systems processes in place, you know, as we’re scaling the business. So over the four years, you know, when those opportunities have happened, I call them opportunities, you know, we’ve worked on not dwelling on the mistakes we’ve made, but instead recognizing how that mistake was made and taking corrective action to avoid it going into the future so that we can hit the bumps in the road, not have a car crash, but keep moving and looking at those in the rear view.

If you could go back in time, would you do anything differently?

I don’t think life is about regrets. I think life is about learning and growing from the decisions that you make. And I truly think that, you know, where I am today is where I am because of the path I took.

And so, if I were wanting to change something, it would take me on a different path. And who knows where that would lead?

What are some of the tools you used to grow as a leader?

You know, in part of my journey, even early on in my career, I sat as treasurer of my daughter’s co-op school, in which case for four years, I sat there and we relocated a school, applied for funding and moved to school. So, I don’t take anything half-heartedly.

So that community involvement really helped to keep that school going. So, it’s still open today because of those efforts, because based on their financial positioning, they wouldn’t have been able to sustain that.

I’m part of a peer group through Tech Canada. That team has been fantastic in terms of helping navigate challenges and to bounce ideas from, you know, it can be lonely at the top, so having that peer group. My leadership coach has been huge. And I would say two other things.

One is I have very great partnerships in my life, and that’s not just in my business, but also with my husband, who I’ve been married to for over 25 years, and my business partner, Donna Gleha, who, you know, we very much complement each other, have a lot of respect for each other, and more importantly, we’re also friends. So, I think all those things combined helped to get us to where we are today and get me to where I am today.

How do you define success?

I define success very differently than I would have said in the beginning of my career. You know, I think when you’re young, I think you’re pursuing the financial aspects of success. I’ve realized as I’ve gotten further along in my career and in launching the finance group, I get more satisfaction or view success through the lens of how we’re impacting the businesses we serve. I love to see the growth in the individuals who work on our teams and seeing them step into those leadership positions themselves, as well as I love alignment with my family and my work. Those are the things I really see as driving success now.

What are some of the core values that you have integrated into your business?

So, the core values that drive me are the same ones we have in our business, which are trust, integrity, accountability, curiosity, and being self-directed. You know, trust being the foundation to anything. You can’t start a relationship, you can’t work in finance, as well as in any relationship without that basis of trust. Those are businesses that are entrusting you with their financial position, and so we take that very seriously. Integrity and accountability kind of go hand in hand.

You know, you have to deliver on what you’re promising to deliver on, and you must stay true to, you know, the core self and to the business values. But what I find that really drives me is curiosity. That need to constantly learn and grow as a person and those around me, so I’m constantly the sharer of information, and that’s a value that I find has served me, and I continue to see it serve me throughout my career.

What are some of the strategies that you use to recruit talent and build teams?

The same way we approach fractional finance in terms of, you know, really making it feel like a shared services or an extension of our client’s team, we take the same approach in finance. Many finance people have been lonely throughout their career because they’re usually the last ones at the top of the food chain in accounting, and people come to them for answers, and the buck stops there. So, when they join the finance group, what they get is camaraderie and collaboration, something that they’ve been hungry for in the past.

So, we do really drive change through that. We work hard to build our teams. We invest in our teams and their growth.

We spend a lot of time helping people understand what drives them, what makes them tick, by using Colby and EQI as metrics and training through those. So that’s how we really build a strong team environment internally, and especially as a remote team, we must put an even higher focus on that.

I’ve been blessed that my business partner, Donna Gleha, was an expert in this sector. She sat in finance recruiting for many, many years, and so she has the unique talent of going and attracting that talent and finding the talent. And then for every 70 people we might interview, it’s only one person who’s really getting hired just because of how rigorous our process is to make sure that we’re equipping our clients with the highest caliber of individuals out there.

What are some of the benefits of establishing your business in Waterloo Region?

So, I would say the benefit of the Waterloo Region is obviously there’s a high concentration of our ICP, our ideal customer profile, right? There’s a lot of small and mid-sized businesses here, and the ones that I’ve worked with are really interested in growth, really interested in reaching that national stage.

And so, I’m always excited to work with, and so is my team, to work with business owners who are really looking to grow and expand and looking for that real financial basis to do that.

What inspires you?

It’s just making a difference. It’s making a difference in the companies that we serve and in our people. People are huge for me. I want to make sure we’re serving our people well. You would notice if you went to our website, we’re probably about a 25% to 75% split of men to women, so we have predominantly women.

Part of what we do is really breaking that glass ceiling for women in finance as well. So, a lot of our leadership training, that’s kind of been one-size-fits-all in the past for people. We make sure that that’s going to resonate with both groups and that they’re able to lead effectively going forward and have the training that actually suits their style.

Did you see any differences as a woman climbing the corporate ladder?

Absolutely. When you’re a woman coming up in the ranks in finance and accounting, there was a glass ceiling that you push up against. When you’re having children, there’s always that limiting talk around executives.

And so, I’ve seen lots of women who’ve been overlooked, not because they’re not capable or not because they don’t have the same skill sets, but maybe they’re not as loud or as vocal about their accomplishments as their male counterparts, which has held them back. So, this isn’t just a journey about saying, you know, women being held back. It’s also, you know, how do we help women and empower them to show up in the way that they need to, to be recognized for the skill sets that they have.

What advice would you give to other aspiring female entrepreneurs?

There was an interesting point here, and I'll bring in a book. Malcolm Gladwell's Revenge of the Tipping Point talks about the 25% representation that must be in any group for voices to be heard. What we still see is a big disparity of females on executive teams, still not even close to that 25% voice.

So even as we break through the glass ceiling and those women get invited to the table, it's still going to take time to build up to that 25%, you know, ratio to have your voice heard. So, what I would say to women is it's not about being like a man. I would say if you can lead through curiosity and bring those difficult questions to the table that get the leadership thinking in different ways, then you will be able to provide change and value to those organizations that you're serving.

You don't have to be the loudest; you can do it through curiosity and gain the same results.

What are some of the goals that you have for the future?

Yeah, we’d like to become one of the best-known fractional finance teams out there. That’s ultimately, you know, our goal to grow this business.

We've grown by 50% year over year, and we intend to keep growing at that rate. But more importantly, we want to make sure we're keeping within our purpose of bringing that insights-based finance, that real rigor and structure, using technology and being, you know, at the forefront of finance for the businesses that we're serving. We just started launching webinars, so we're doing webinars for people to attend.

So, you know, check us out on LinkedIn and you’ll see links to those webinars there or reach out to me. I’ll have my contact info. And also, you know, as you’ll see over the next three to four years, AI is going to be transformational in finance, and the finance group is dedicated to making sure that they’re on top of those changes in technology.

And so, if you are looking to streamline or fix your finance department or make it more scalable, I encourage you to reach out to us to see how we might be able to support you.

What financial advice would you give to other business owners?

The challenge that I see for business owners is that they are scared of digging into their financial statements, or don't know much about them, and then get held back from getting funding from banks because they can't tell the story of their business through their financial reporting. And so, having strong financial reporting is key, and reporting that really reflects their business. It should be a decision-making report they are receiving, and if that's not what they are getting, they probably need to explore how they can get what they need.

Where can viewers find out more about your business?

Go to our website or email me directly at: Dorothy.zubel@thefinancegroup-global.com

The post Fearless Female (May): Dorothy Zubel appeared first on Greater KW Chamber of Commerce.


Elmira Advocate

GROWTH AT ANY COST MANTRA

 

All around the world economic growth continues to be the alleged panacea for all our ills. Which is complete nonsense even on the face of it. Yes it would be nice to have enough food, water, shelter, health care etc. for everyone on the planet but it is an impossible task getting worse every day. Why is that you ask? Well there are more and more people on the same size planet every day. There is however not an automatic and corresponding increase in those life-sustaining items every day as the population continues to grow. Combined with that is the fact that economic growth at least partially depends on population growth. In other words it's a case of having an ever expanding market for your goods be they foodstuffs, clothing or automobiles.

Population growth leads to some awful environmental problems, from basic dumping and overflows from sewage treatment plants into our lakes and rivers as well as forest removal in order to grow more food for more people. Then of course we have climate change, which is certainly attributable to more petroleum use whether for heating our homes or running cars and trucks delivering more products to more people. Climate change including rising ocean levels as ice caps melt is causing havoc with flooding of coastal towns and cities. Greater heat worldwide is causing massive increases in both the number and magnitude of violent weather events, resulting in more hurricanes and tornadoes destroying property and lives. Both droughts and floods are increasing in number and both affect food production which is already under stress.

Water shortages used to be an issue in third world countries. Now countries are looking for water sources outside their own borders. Water, whether groundwater or surface water, is in greater demand even as our industries continue to ignore pollution laws with little or no public recourse against them. Here in Elmira the chemical company now responsible is bragging about its fine "cleanup" work even as the Ministry of Environment rewrites the Control Order (1991) that was supposed to restore our local aquifers destroyed by Uniroyal Chemical. The deadline is 2028 and it won't happen, although desperation may cause attempts to reopen parts of the aquifer to drinking water pumping while pretending to isolate other more contaminated parts.

This is all our futures unless growth is more seriously limited and supervised. 



James Davis Nicoll

Singing Loud / Tripoint (Company Wars, volume 6) By C J Cherryh

C. J. Cherryh’s 1994 Tripoint is a stand-alone space opera. Tripoint is the sixth of Cherryh’s Company Wars books.

Twenty years ago, Austin Bowe of the ship Corinthian raped Marie Hawkins of the Sprite. Political considerations precluded any meaningful punishment for Austin or recompense for Marie. Marie was left with a son — Tom — and a burning desire for revenge.

Corinthian and Sprite being docked at Viking1 station, perhaps the time has finally come for vengeance.

The Backing Bookworm

Lady Tremaine


When I saw that this book was a retelling of Cinderella but from the 'wicked' stepmother's perspective, I snagged a library copy. 
The Reese sticker on the front should have warned me off.
This book had an interesting premise but didn't deliver. Initially I was intrigued with getting the evil stepmother's perspective and perhaps seeing Cinderella in a new (and negative) light, giving credence to the stepmother's poor treatment of her stepdaughter. 
But the first third of the book is dull, excessively wordy with a weak plot and an ending that's tacked on like a bookish Hail Mary. I assume it was added to give some much needed (yet icky - iykyk) oomph to a slow-moving story that didn't have enough of the original fairy tale in it. 
I'm not sure what the author was trying to do here but it didn't work for me. Who is the reader supposed to root for? Ethel? Personally, I loved Sigrid (and Lucy the hawk) and that's not a good sign. 
Final Thoughts: Premise with potential. Awkward and uneven execution. It's a nope for me but I'm in the minority. 


My Rating: 2.5 stars
Author: Rachel Hochhauser
Genre: Historical Fiction, Retelling
Type and Source: Hardcover from public library
Publisher: St Martin's Press
First Published: March 3, 2026
Read: April 16-19, 2026

Book Description from GoodReads: A breathtaking reimagining of Cinderella, as told through the eyes of its iconic "evil" stepmother, revealing a propulsive love story about the lengths a mother will go to for her children
A widow twice-over, Etheldreda is now saddled with the care of her two children, a priggish stepdaughter, and a razor-taloned peregrine falcon. Her entire life has become a ruse, just like the manor hall they live in: grand and ornate on the exterior, but crumbling, brick by brick, inside. Fierce in the face of her misfortune, Ethel clings to her family’s respectability, the lifeboat that will float her daughters straight into the secure banks of marriage.

When a royal ball offers the chance to secure the future she desperately desires, Etheldreda must risk her secrets, pride, and limited resources in pursuit of an invitation for her daughters—only to see her hopes fulfilled by the wrong one. As an engagement to the heir of the kingdom unfolds with unnerving speed, she discovers a sordid secret hidden in the depths of the royal family, forcing her to choose between the security she’s sought for years and the wellbeing of the feckless stepdaughter who has rebuffed her at every turn.

As if Bridgerton met Circe, and exhilarating to its core, Lady Tremaine reimagines the myth of the evil stepmother at the heart of the world’s most famous fairytale. It is a battle cry for a mother’s love for her daughters, and a celebration of women everywhere who make their own fortunes.


Jane Mitchell

Getting Elected: The Anatomy of a Ward Campaign

Did you miss the Region of Waterloo School and/or the Women's Campaign School? Here is a campaign school for everybody.

Getting Elected: The Anatomy of a Ward Campaign
May 6, 6:30-8:30 p.m.
Fresh Ground Café, 256 King St. East, Kitchener (just north of the market)

Join campaign experts and sitting councillors for a no-nonsense orientation to what it takes to run a campaign for municipal ward councillor.

  • Simple but effective approaches to communications
  • Best practices for door knocking and working the crowd
  • No-fuss fund-raising
  • A roadmap from now to election day

This free event is for candidates, potential candidates, and folks hoping to directly support candidate campaigns.

Registration is requested to help us plan for the event, but if you see this at the last minute, you are still welcome to attend! If you have any questions, please email president@waterloolabour.ca


Elmira Advocate

ABUNDANT, ACCURATE & HONEST ADVICE WAS GIVEN TO UNIROYAL & SUCCESSORS: IT WAS ALL IGNORED AND DENIGRATED

 

Who gave them advice exactly? CEAC gave them good advice for years. The Region of Waterloo did as well, until they walked away unhappy with being ignored. Various bodies, from the initially independent (albeit stacked with Uniroyal supporters) UPAC, followed by CPAC (Crompton/Chemtura), followed by RAC/TAG and then TRAC, all gave some good advice, but other than the 2011-early 2015 CPAC, none of the others ever stood up to the company and the Ministry (MOE/MECP) and demanded honest discussions, action and good faith from them. Chemtura were, just like their predecessors, incapable of doing any of that.

Woolwich Township have always been far too sympathetic and deferential to the chemical company, with the Oct. 2010 to Oct. 2014 council of Todd Cowan at least trying to support greater honesty at CPAC, although even then Todd Cowan personally was a problem. Eventually, as we all know, he self-destructed, albeit with a little help. The GRCA can best be described as a joke, which speaks volumes as to my lack of response to Doug Ford's clipping their wings.

The advice given was based upon various reports and experts advising what needed to be done, including the MOE from time to time, in between polishing Uniroyal's apple and kissing their feet. Advice included Source Removal including DNAPLS, pumping to both the on-site and off-site Target Rates determined by their own consultants, removing dioxins, DDT and more in the downstream Canagagigue Creek, and so much more. Off-site there were poorly managed DNAPLS (chlorobenzene), whether from Uniroyal or from a now decades-later admission of a second (unnamed) industrial source. There was also advice given to clean up their air emissions, which literally took them years to get around to, meanwhile harming local residents' (adults & children) health.

The company's and the Ministry's (MOE) early on sweetheart deal (Oct. & Nov. 1991) set the tone for the company's continuing to be in charge of the "cleanup" from start to finish. The "cleanup" unsurprisingly has failed just like the company and Ministry have failed Elmira, Woolwich and downstream residents. 


Elmira Advocate

HOW MANY WOOLWICH COUNCILLORS WANT TO HANG AROUND FOR THE WATER BLAME?

 

Well, the chief architect of the Elmira cleanup failure over the last twelve years (by the time the October elections roll around) is running for the hills. She also bears some responsibility at the regional level for the water crisis throughout all of Waterloo Region. How many other regional as well as local councillors do you think will join the exodus? I would expect the mayors with more than one term as mayors and regional councillors might be thinking that this is a good time to hit the road. Is that only Barry Vrbanovic or is one of the other big city mayors a repeat culprit?

I expect that up here in Dogpatch (Woolwich) there might be some small exodus of councillors, although actually, other than Bonnie Bryant, the others are all first-term councillors. Hard to fault them horribly based on one term's experience in which a quadruple-term Sandy Shantz was leading the pack. She also spent a term as a councillor before her three terms as mayor. A small peccadillo derailed her for one term between her term as councillor and her three terms as mayor.

I have spent years trying to figure out if she basically is a naive fool, easily swayed and manipulated by the likes of Dave Brenneman, Mark Bauman, Chemtura/Lanxess and other local big shot companies and individuals. Or has she, with full knowledge, ploughed ahead wreaking havoc on our environment and health by prioritizing growth and business at all costs? Uniroyal and successors is not the only industrial dump in Woolwich Township. Breslube, prior to Safety-Kleen, damaged our environment's air and water for extensive distances throughout the 70s, 80s and 90s. Safety-Kleen were always welcomed with open arms and glad-handing by earlier Woolwich mayors including Bill Strauss, who personally owned multiple contaminated sites related to the fuel industry.

Perhaps we the citizens deserve both the environment and the mayors that we've had. Sebastian (TRAC) has very lately sent an excellent treatise on to some local environmentalists and unfortunately to a couple of wannabes that may bite him. That could be unfortunate or it could turn out to be a blessing in disguise as he spends less time with those he refers to as deferential.


Brickhouse Guitars

Pierre Explaining Assembly Mold - Interview From Boucher Guitars

-/-

Github: Brent Litner

brentlintner starred NousResearch/hermes-agent

♦ brentlintner starred NousResearch/hermes-agent · May 3, 2026 19:37 NousResearch/hermes-agent

The agent that grows with you

Python 134k Updated May 6


Github: Brent Litner

brentlintner starred plastic-labs/honcho

♦ brentlintner starred plastic-labs/honcho · May 3, 2026 19:35 plastic-labs/honcho

Memory library for building stateful agents

Python 3.3k 1 issue needs help Updated May 5


Code Like a Girl

How Senior Engineers Actually Debug (It’s Not What You Think)

Engineering Beyond Code | Part 3
This skill can transform your engineering career.
♦Photo by Hitesh Choudhary on Unsplash

Most early engineers think debugging is about being fast, clever, or having seen the bug before.

It’s not.

Senior engineers don’t debug faster because they’re smarter. They debug better because they approach problems differently. What looks like intuition is usually a disciplined, almost boring process underneath.

1. They Don’t Start With Code — They Start With the System

A common instinct is to jump straight into the code and start scanning for issues.

Senior engineers resist that urge.

Instead, they first ask:
What part of the system could even produce this behaviour?

They mentally map the flow — request → service → dependencies → storage → response. Before touching a single line of code, they narrow down where the bug could logically exist.

Debugging, at its core, is a search problem. Seniors reduce the search space first.
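A minimal sketch of that idea, under invented assumptions (the stage functions, the payload, and the `looks_correct` check below are all made up for illustration): instead of reading every line of code, you check the output at each stage boundary and stop at the first one that looks wrong, shrinking the search space to a single component.

    # Hypothetical sketch: treating debugging as a search over stage boundaries.
    # The stages and the sanity check are invented; the pattern is the point.

    def bisect_pipeline(stages, payload, looks_correct):
        # Run the flow stage by stage; return the first stage whose output fails the check.
        for stage in stages:
            payload = stage(payload)
            if not looks_correct(stage.__name__, payload):
                return stage.__name__   # the bug now lives in (or before) this stage
        return None                     # every stage looked fine; suspect the check or elsewhere

    # Toy stages standing in for request -> service -> storage
    def parse_request(p):  return {**p, "parsed": True}
    def call_service(p):   return {**p, "status": 200}
    def write_storage(p):  return {**p, "persisted": False}   # pretend the bug is here

    def looks_correct(name, p):
        expected_key = {"parse_request": "parsed", "call_service": "status", "write_storage": "persisted"}[name]
        return bool(p.get(expected_key))

    print(bisect_pipeline([parse_request, call_service, write_storage], {}, looks_correct))
    # -> write_storage

The same idea applies outside code: checking a queue depth, a database row, or a downstream response at each boundary keeps halving the territory you still have to search.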

2. They Form Hypotheses (And Try to Disprove Them)

Junior approach:
“I’ll try random things until something works.”

Senior approach:
“I think X might be happening because of Y. Let me prove myself wrong.”

This is subtle but powerful.

Instead of blindly trying fixes, they create small, testable hypotheses:

  • “Is this a data issue or a logic issue?”
  • “Is the bug happening before or after this service call?”
  • “Is this reproducible or intermittent?”

Each step is designed to eliminate possibilities, not just find solutions.
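One way to make a hypothesis disprovable, sketched below under invented assumptions (the `compute_invoice_total` function and the fixture data are made up), is to feed known-good data into the suspect logic. If the test passes, the "bad data" hypothesis survives; if it fails, the logic itself is implicated.

    # Hypothetical sketch: testing "is this a data issue or a logic issue?"
    # The function under suspicion and the known-good fixture are invented.

    def compute_invoice_total(line_items):
        return round(sum(item["qty"] * item["unit_price"] for item in line_items), 2)

    def test_logic_with_known_good_data():
        known_good = [
            {"qty": 2, "unit_price": 9.99},
            {"qty": 1, "unit_price": 0.02},
        ]
        # Passing: the logic handles clean input, so suspicion shifts to the real data.
        # Failing: the logic is broken regardless of the data.
        assert compute_invoice_total(known_good) == 20.00

    test_logic_with_known_good_data()
    print("Logic holds on known-good data; next hypothesis, please.")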

3. They Reproduce the Problem Reliably

If a bug can’t be reproduced, it can’t be debugged effectively.

Senior engineers invest time in:

  • Creating minimal reproducible cases
  • Controlling inputs
  • Removing noise from the system

They don’t rush to fix. They stabilize the problem first.

Because once a bug is reproducible, it stops being mysterious and starts being mechanical.
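As a rough illustration of what "controlling inputs" can look like (everything here is invented: the backoff function, the seed, and the assertion), a flaky behaviour gets pinned down by injecting a seeded random generator and a fixed input set, so the failure, if it exists, shows up identically on every run.

    # Hypothetical sketch: making a flaky failure deterministic by pinning
    # every source of variation (seed, inputs) before trying to fix anything.

    import random

    def pick_retry_delay(attempt, rng, base=0.5, cap=30.0):
        # Exponential backoff with jitter; rng is injected so a repro can control it.
        return min(cap, rng.uniform(0, base * (2 ** attempt)))

    def minimal_repro():
        rng = random.Random(1234)                                # pinned seed, not live randomness
        delays = [pick_retry_delay(a, rng) for a in range(8)]    # pinned inputs
        print(delays)                                            # identical output on every run
        assert all(d <= 30.0 for d in delays), "delay exceeded the cap"

    minimal_repro()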

4. They Use Observability as a Tool, Not an Afterthought

Logs, metrics, traces — these aren’t just “nice to have.”

They are how senior engineers see the system.

Instead of guessing, they ask:

  • What do the logs say at each step?
  • Are there anomalies in metrics?
  • Where does the timeline break?

If visibility is poor, they don’t proceed blindly — they improve observability first.
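If the logs genuinely say nothing useful, improving observability can be as small as the sketch below (a hypothetical handler; the field names, logger name, and stand-in work are all invented): one line at the start, one at the end, and one on failure, each carrying a request id and a duration, is often enough to see where a timeline breaks.

    # Hypothetical sketch: minimal structured logging around a suspicious step
    # so each request leaves a timeline that can be inspected later.

    import logging, time, uuid

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("checkout")

    def handle_request(payload):
        request_id = str(uuid.uuid4())
        started = time.perf_counter()
        log.info("start request_id=%s payload_keys=%s", request_id, sorted(payload))
        try:
            result = {"total": sum(payload.get("amounts", []))}   # stand-in for the real work
            log.info("done request_id=%s total=%s elapsed_ms=%.1f",
                     request_id, result["total"], (time.perf_counter() - started) * 1000)
            return result
        except Exception:
            log.exception("failed request_id=%s elapsed_ms=%.1f",
                          request_id, (time.perf_counter() - started) * 1000)
            raise

    handle_request({"amounts": [19.99, 5.00]})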

5. They Avoid Fixing Symptoms

A quick fix that “makes the error go away” is tempting.

Senior engineers are cautious.

They ask:

  • Why did this happen?
  • What allowed this to happen?
  • Could this appear elsewhere?

They care about root causes, not just surface-level fixes.

Because debugging isn’t just about solving this bug — it’s about preventing the next one.

6. They Know When to Stop Digging Deeper

Not every bug needs a philosophical investigation.

Senior engineers balance depth with pragmatism:

  • If it’s a one-off issue → patch and move on
  • If it’s systemic → investigate deeply
  • If it’s unclear → isolate and monitor

They understand that engineering is also about time and trade-offs, not just correctness.

7. They Communicate While Debugging

Debugging is rarely a solo activity at senior levels.

They:

  • Share context early
  • Explain their hypotheses
  • Keep stakeholders updated

Not because they need help — but because debugging is also about alignment and trust.

The Real Difference

The biggest shift is this:

Junior engineers try to find the bug.
Senior engineers try to understand the system until the bug becomes obvious.

Debugging isn’t a talent. It’s a structured way of thinking:

  • Narrow the search space
  • Form and test hypotheses
  • Make the system observable
  • Focus on root causes
  • Balance depth with speed

What looks like experience is often just discipline applied consistently.

Distilled Principle

Debugging is not about being clever — it’s about being methodical under uncertainty.

And that’s what makes it a senior-level skill.

How Senior Engineers Actually Debug (It’s Not What You Think) was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

Change Fitness: The Career Skill AI Can’t Replace

You don’t need to code to survive AI. You need Change Fitness. Here’s the 30% mindset framework that Harvard says will save your career.

Continue reading on Code Like A Girl »

Code Like a Girl

How to Know Your AI Feature Works Before Users Say It Doesn’t

AI Evals for Product Managers

Continue reading on Code Like A Girl »


Cordial Catholic, K Albert Little

Life's Big Questions: A Moment of Profound Realization #shorts #atheist #christian #apologetics

-/-

James Davis Nicoll

Sunset / The Inheritors By William Golding

William Golding’s 1955 The Inheritors is a stand-alone speculative paleoanthropology novel.

Lok’s community is a simple one: himself, Ha, Nil, Fa, old Mai, an old woman, little Liku, and a baby. As far as Lok knows, this single group of wandering gatherers are the only people there are. His life is similarly straightforward: Lok and his kin cooperate to search for food.

Life is about to become much more complicated.

The Backing Bookworm

This Story Might Save Your Life


This book has been all over the Bookstagram universe. It was a combination of a couple of genres (mystery and a titch of romance), but I'll be honest, the romance element surprised me and didn't quite win me over. Instead, I thought the strength was in the tension and the friendship between the two main characters. 
This is a solid suspense read with lots of red herrings and narrators Julia Whelan (one of my all-time favs!) and Sean Patrick Hopkins bring the connection between best friends Benny and Joy to life. The different sounding podcast elements were also a nice addition.
My wee beef: There's a pretty big secondary cast, which made me pause as the twists were revealed as I tried to remember who that character was in relation to the others. 
But when I initially finished this audiobook, I wasn't sure how I felt about it and had to sit with it for a bit. It was a good read in the moment but isn’t a book whose details will stay with me long. 
It's definitely twisty and it lives up to its hype ... but just barely.

My Rating: 4 stars
Author: Tiffany Crum
Genre: Suspense
Type and Source: eAudiobook from public library
Narrators: Julia Whelan, Sean Patrick Hopkins
Run Time: 10 hours, 39 min
Publisher: Macmillan Audio
First Published: March 10, 2026
Read: April 15-19, 2026

Book Description from GoodReads: Best friends Benny and Joy like to say they’ve been saving each other’s lives since the moment they met. Until the day Joy disappears and Benny is suspected of murder . . .
Benny Abbott and Joy Moore host one of the most beloved podcasts in the world. Each week, they delight listeners with a different “against all odds” survival story, gleefully finding the weird, life-affirming humor in near-death experiences. Since their first episode on Joy’s experience with severe narcolepsy, they’ve been the best friends everyone wants to befriend—and thanks to the meticulous management of Joy’s husband, Xander, they’ve built a lucrative empire.

The problem is, their next survival story may be their own. When Benny arrives at Joy and Xander’s one morning to record, he finds shattered glass and an empty house. The one clue shedding light on the couple’s disappearance is the incomplete, previously unseen first draft of Joy’s memoir. Benny is desperate to find them, even when the police soon zero in on him as their prime suspect.

Millions of devoted listeners think they know the “real” Benny and Joy. But as the hours tick by, and the odds seem increasingly stacked against Joy and Xander being found alive, not even the most devoted fans could guess the terrible secrets their favorite famous BFFs have hidden from the world—and from each other.



Kitchener Panthers

2026 SIGNING TRACKER: C Samuele Bruno

KITCHENER - The Kitchener Panthers are proud to announce the signing of the versatile Italian import Samuele Bruno.

Bruno has spent the last several years playing in Italy's top baseball league, Serie A.

He hit a career .288 in 89 games playing for four teams in Serie A. This includes hitting .286 in 18 games in 2024 with Grosseto.

This spring, he played for Hastings College (NAIA) in Nebraska, where he hit .350 in 45 games.

Hastings' season ended Friday with a loss in the Great Plains Athletic Conference tournament.

"Samuele's versatility on the field will play a big factor for us this year," said general manager Shanif Hirani. "Being able to catch, as well as play the infield and the outfield will be extremely valuable.

"He has high level experience playing in Italy and is coming off a successful collegiate career. I'm excited to see that translate with us this season."

============

SAMUELE BRUNO

Bats/Pitches: R/R

Hometown: Nettuno, Italy (IMPORT)

Birthdate: July 23, 2003

Pronunciation: Sam-WELL-eh BRUNO


Elmira Advocate

IS JUSTICE CRAIG PARRY ANOTHER JUSTICE ROBERT REILLY, ONLY 2.0?

 

I have noticed both the similarities and the differences. Both judges are local (Waterloo Region). Both are arrogant know-it-alls beyond belief. Both are stupid enough to think that either everybody in the world will believe their horseshit or that those who don't, do not matter.

The Dishonourable Robert Reilly dismissed the testimony of seven parent witnesses who supported me and my wife at trial. He blamed me alone for allegedly holding them under my spell and inducing them to falsely testify against the Plaintiff who sued both myself and my wife. Justice Craig Parry on the other hand dismissed the evidence of 48 female witnesses against Dr. Jeffrey Sloka. Justice Parry claimed that ALL the witnesses were unreliable; however, he blamed Waterloo Regional Police, the media and the prosecutors for having misled them in a variety of ways. I guess if you don't like multiple witnesses' testimony then you need to scapegoat somebody.

Oh, and my case was a civil case, not a criminal case, plus the co-accused (my wife) was found NOT liable as there was absolutely not one single shred of evidence presented against her. When I refused to put her on the stand in my defence, both the Plaintiff's lawyer and the asshat Judge went ballistic. My in-hindsight conclusion was that they were hoping to get her to say something on the stand to implicate herself because they had absolutely no evidence against her.

I had hoped and thought that Justice Reilly was the only local piece of crap Judge we had. I was mistaken apparently. I wonder if anybody at trial asked the police whether or not there were any male patients of Dr. Sloka (neurologist - brain, nerves etc.) who had complained about inappropriate and/or intimate body examinations. If not, that kind of looks bad on the doctor, don't you think? I also wonder how Justice Parry satisfied himself that Dr. Sloka's intimate bodily examinations were appropriate? The Crown Prosecutor's witness, a neurologist, said it was not appropriate. No other independent and certified witness said otherwise, yet our Justice Craig Parry seems confident enough to substitute his judgement for that of a well known, certified professional neurologist PLUS the opinions of the College of Physicians and Surgeons, who permanently stripped Dr. Sloka of his medical license for his behaviour with female patients.

If it looks like a duck ... if it waddles like a duck ... and if it quacks like a duck ... it probably is a duck, or in this case he probably is two letters past d.... and rhymes with duck.


Brickhouse Guitars

Coffee Break with Kyle & the Godin Century

-/-

Brickhouse Guitars

Coffee Break with the Boucher GR-SG-161T

-/-

Cordial Catholic, K Albert Little

Who Has the Authority? #Bible #apologetics #Christian #church

-/-

Child Witness Centre

Youth Symposium has Huge Impact for Over 1,600 Students

Going through adolescence can be tough enough for many youth – but especially during these uniquely challenging and quickly evolving times we’re in! That’s why our team at Child Witness Centre (CWC) is delighted our recent Youth Symposium was successful in making a huge impact.

In the media! Read this wonderful GuelphToday article by Taylor Pace. Also, check out our social media posts recapping day 1, day 2, and day 3 of Youth Symposium!

The 19th annual edition of this very special program for local youth has just wrapped up! Our Youth Symposium began small in 2003 but has grown to benefit many hundreds of young people every year! This time around, a total of over 1,600 grade 8 students and their teachers attended. They came from almost 30 local schools – public, Catholic, and private – from across Waterloo Region, Guelph, and Wellington County.

Large and Dynamic Initiative

Strategically scheduled in April, this powerful preventative and outreach initiative reaches grade 8 students as they prepare academically and personally for high school and ultimately the person they’re becoming. The interest level from schools is always high! This one-day educational experience is run over three days – to accommodate the large volume of students. Those in attendance heard dynamic presentations aimed at making a big impact in their lives.

New this year, the Symposium was delivered in a single-stream, large-venue format, while remaining grounded in the same values and focus that educators have come to trust. For the first time in our program’s history, the event took place at the University of Guelph (inside Rozanski Hall) on April 23 and at Wilfrid Laurier University (inside Lazaridis Hall) on April 28 and 29. In most recent years, the program took place in movie theatres, spreading the classes out between multiple cinemas, with far more speakers involved. But with theatre renovations and fewer seats being available, it was time to reimagine what the program could look like.

Powerful Messages Delivered

The featured speakers were a few of the best in the country for youth audiences. Chris Gray, Christene Lewis, and Jeff A.D. Martin leveraged their incredible lived experiences and gift of storytelling, while pouring themselves into their very meaningful and inspiring presentations. They encouraged living with purpose, hope, optimism, confidence, and kindness – while also dreaming big and pursuing goals. Themes included mental wellness, self-worth, resiliency, belonging, healthy relationships, positive thinking, and strong decision-making.

Our CWC representatives also shared a key message about the supports we offer if a young person becomes a victim of abuse or crime. A big highlight was our accredited facility dogs, Brady and Monet, appearing on stage. Last year alone, our agency supported 975 children and youth through the criminal justice system, along with 770 caregivers, to heal and move forward from their trauma.

Why This Program Matters

Through the amazing messages and interactions shared, there’s sure to be a mighty and lasting ripple effect in our community. Students were encouraged to overcome their challenges, recognize their inherent worth, shift to a champion mindset, seek help if needed, and understand the power they have in determining their own path forward. By taking place on university campuses, the event is also expected to help students envision themselves in a post-secondary setting in the future. For now, educators can carry the core messaging and themes back into their classroom discussions.

Glowing feedback for Youth Symposium has been received from many students and teachers over the years, including at this edition. One student said, “My experience was amazing. The presentations were all very inspiring, funny, and informative.” A teacher said, “Youth Symposium is one of the best field trips that a teacher could take their grade 8 class to.”

An Abundance of Gratitude

Our team at CWC would like to thank everyone who helped pull off an immensely energizing, meaningful, and unforgettable initiative. The Youth Symposium wouldn’t be possible without generous financial supporters and many volunteers. The program sponsors this year were KW Sertoma Club, Fergus-Elora Rotary Club, the Barb and Greg Billo Fund held at Waterloo Region Community Foundation, and the Brian and Pauline Fisher Fund held at Waterloo Region Community Foundation.

As we see things, youth not only deserve our support, but are 100% our future! That makes them a priceless investment of our time and resources. The reward is broad, deep, and lasting in our community.

The post Youth Symposium has Huge Impact for Over 1,600 Students first appeared on Child Witness Centre.


Github: Brent Litner

brentlintner pushed vim-settings

♦ brentlintner pushed to master in brentlintner/vim-settings · May 1, 2026 18:22 2 commits to master
  • adc5084
    Bump some packages
  • 79366c1
    Fix nvim tree bg no longer sticking as transparent

Elmira Advocate

MAYOR SHANTZ'S LEGACY IS LIKELY PROBLEMATIC

 

I suggest that her legacy is problematic mostly based upon her efforts regarding Elmira's water situation. Yes I am aware that the Woolwich Observer and others have been warning taxpayers here in Woolwich Township for decades about profligate municipal spending. Those folks have also likely suggested either empire building or at least unnecessary municipal hires which further add to the Township's financial burden when wages, benefits and long term cost of living increases are all included.

It is also possible that there are other issues, such as local development both in Breslau and Elmira, that may have offended many citizens. I do not however pretend to be an expert on either the intricacies of development or wastewater infrastructure. I do, however, have considerable knowledge of water supply, including municipal, regional and even individual wells on private residences.

For myself there is one other major area of recent dispute that Mayor Shantz possibly cleverly avoided at all costs and that is the Region of Waterloo's mishandling of  Wilmot's water supplies. Keep in mind that Mayor Shantz has also been a regional councillor for the past twelve years and should have stepped up and provided some leadership within regional council. Absolutely none was heard or reported upon at regional council meetings that multiple media have been attending vigorously for at least the last six months. 

I have a colleague here in Elmira who has resoundingly been repeating the following mantra namely that Ms. Shantz's actions and decisions over the last eleven years have guaranteed the now universally accepted failure to clean up the Elmira Aquifers by 2028. While the "cleanup" has been irregular, inconsistent and inadequate for decades, the last chance to turn things around was in 2015 after the previous year's election which made Ms. Shantz mayor. Unfortunately, likely with bad advice (Brenneman & Bauman), she did absolutely everything wrong to restore the public's aquifers and everything right to minimize the polluter's environmental expenses and to polish Chemtura's and Lanxess's public images. 

She has announced that she won't run in this fall's election. While that is a long overdue blessing I am not sure that local powers that be will not have multiple replacements waiting in the wings to solidify their influence generally at the expense of the public interest.  



Code Like a Girl

Advocate For Paid Parental Leave, and Other Actions for Allies

Better allyship starts here. Each week, Karen Catlin shares five simple actions to create a workplace where everyone can thrive.
♦
1. Advocate for paid parental leave

Paid parental leave is already too rare in the United States. And now some employers are scaling it back.

Only 27% of private-industry workers have access to paid family leave, according to the US Department of Labor.

And recently, some high-profile companies that do offer paid leave for new parents have announced reductions. Zoom is cutting theirs by about 6 weeks, to 18 weeks for birthing parents and 10 weeks for non-birthing parents. Deloitte is cutting its parental leave from 16 weeks to 8 weeks.

As former Google executive Laszlo Bock noted, when one company cuts benefits, it can normalize others doing the same.

Yet paid parental leave is an important retention strategy. Catalyst reported earlier this year that 42% of women who voluntarily left their jobs said caregiving responsibilities, including childcare costs, drove their decision to exit the workforce.

There’s also a business case. The cost of one regrettable departure can be greater than the cost of providing leave, including lost productivity, hiring, and onboarding. Estimates range from 50% to 200% of someone’s annual salary.

Consider how you can advocate for protecting or improving paid parental leave. Who can you raise this issue with within your organization or union?

Share this action on Instagram, LinkedIn, or YouTube.

2. Encourage taking the full leave

One reason organizations scale back parental leave may be that leaders misunderstand it. They might view it as unnecessary, optional, or even a vacation.

I’m still bothered by something a product manager said many years ago when I was about to go on maternity leave: “When Karen gets back from her holiday…”

That mindset matters. As gender equality researcher Siri Chilazi recently noted: When men in leadership take their full leave and talk about it openly, it changes what feels normal and possible. The more who do it, the less controversial it becomes.

Folks, let’s not disparage male coworkers and other non-birth parents for taking parental leave. Let’s encourage them to use the full benefit, support their time away, and welcome them back without judgment.

3. Document marginalized lives

When members of underrepresented or marginalized groups are left out of the historical record, their contributions can be forgotten.

I appreciated learning about UC Berkeley professor Juana María Rodríguez, who asks her students to create and edit Wikipedia articles about LGBTQ+ people, history, and current issues. Together, they’ve made more than 300,000 edits.

Rodríguez’s goal? To prevent erasure.

It also reminded me of physicist Jessica Wade, who has personally added the biographies of more than 875 women scientists to Wikipedia so their contributions are recognized globally. She shares tips and advice for others who want to get started on a TED Talks blog post.

You can help, too.

Think about a leader, author, or someone in your industry who is a member of a marginalized group. If Wikipedia has an entry for them, can you make an edit to reflect their most recent accomplishments? If there isn’t an entry yet, follow Jessica Wade’s advice on how to create one.

Or, if you work in higher education, consider partnering with Wiki Education. They help faculty integrate creating content for Wikipedia articles into their curriculum.

(Thanks to Bernadette Smith’s Equality Institute Newsletter for introducing me to the UC Berkeley initiative.)

4. Treat AAPI racism seriously

May, which is Asian American and Pacific Islander (AAPI) Heritage Month in the United States, is a good reminder to focus on the inequity AAPI people can face in our workplaces. One issue? The false assumption that Asian employees are “doing fine” and not impacted by discrimination.

That belief is tied to the model minority myth. It’s a harmful stereotype that Asian people are uniformly successful and therefore less affected by bias.

As allies, let’s push back if we hear someone saying, “They’ll be fine,” or otherwise trying to downplay reports of harassment against employees of Asian and Pacific Island descent. We can ask, “What makes you say that?” and hopefully turn the conversation towards supporting those employees and addressing the discrimination.

5. Community spotlight: Point out the obvious

Sometimes, allyship is as simple as pointing out what others miss.

When newsletter subscriber Lisa had to withdraw from a panel, she recommended a woman colleague with equivalent experience as her replacement. Instead, the organizers chose a man. It just so happened that he didn’t represent the same type of organization as Lisa and her proposed substitute.

Lisa didn’t let it slide. She replied that their revised panel would now be all male and less diverse in another important way, too.

The organizers reconsidered and added her recommended panelist to the lineup.

Lisa summed it up well: “Sometimes it requires pointing out the obvious.”

Thank you, Lisa. What feels obvious to some of us often isn’t visible to others.

If you’ve taken a step towards being a better ally, please reply to this email and tell me about it. And let me know if I can quote you by name or credit you anonymously in an upcoming newsletter.

That’s all for this week. I’m glad you’re on this journey with me,

Karen Catlin (she/her), Author of the Better Allies® book series
pronounced KAIR-en KAT-lin, click to hear my name

Copyright © 2026 Karen Catlin. All rights reserved.

Being an ally is a journey. Want to join us?

  • Say thanks to Karen and buy her a coffee (Need a receipt for educational reimbursement? Send us an email, and we’ll take care of it.)
  • Follow @BetterAllies on Instagram, Medium, or YouTube. Or follow Karen Catlin on LinkedIn
  • This content originally appeared in our newsletter. Subscribe to “5 Ally Actions” to get it delivered to your inbox every Friday
  • Read the Better Allies books
  • Form a Better Allies book club
  • Tell someone about these resources

Together, we can — and will — make a difference with the Better Allies® approach.

♦♦

Advocate For Paid Parental Leave, and Other Actions for Allies was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


House of Friendship

Slime-based Learning at Victoria Hills

Every week at Victoria Hills Community Centre, students have the chance to broaden their education at House of Friendship’s Adventure Learning program.

Held after school, Adventure Learning encourages schoolchildren in Grades 2 to 8 to explore new subjects, by making math, language, and science exciting and fun.

What better way to learn about the world than through slime? Matias enjoys a hands-on lesson at Victoria Hills Community Centre.

“We try to build a program based on what the kids are interested in,” said Kathleen Cameron, Neighbourhood Program Leader. “We usually have a video, where we share the science behind the activity.”

One week in February, the young students discovered how Silly Putty was invented by mistake. The children watched a short video about the inventor, James Wright, who tried to make a synthetic rubber substitute to support American war efforts in the Second World War. Instead, he created a bouncy, stretchy substance that eventually became a toy beloved by children throughout the world.

It was after watching the video, however, that the real fun began – the students all took part in making their own version of Silly Putty, slime.

They had to follow the recipe carefully, measuring out the ingredients – glue, baking soda, food colouring, and the secret ingredient, contact lens solution (containing boric acid and sodium borate). And while some of the students may not have measured as carefully as they should have, they still learned something in the process. The creation of slime, much like any other concoction, requires attention to detail.

It was a little bit messy, but it was worth it.

In addition to this kind of hands-on learning, the students also enjoy some physical activity in the gym, and have also taken part in spelling and math activities as part of the afternoon. Snacks are provided, recognizing that some of the children might come from households struggling to make ends meet, and after-school snacks aren’t always available at home.

This free program provides more than just support to the children’s education. It’s also providing a community.

“It’s fun to work with the kids,” said Kathleen. “I love seeing them interact with each other. They get to know us, come out of their shells, and meet some healthy role models while they are here.

“This is the neighbourhood I live in, and I’m happy to be there for my community. I get to help my neighbours here, and that’s wonderful.”

Your support of Neighbourhoods work at House of Friendship is helping young students in Waterloo Region all year long! Your care and compassion will have a lasting impact on the lives of children who need support as they learn and grow. Thank you!

Donate today to help kids access educational programming!

The post Slime-based Learning at Victoria Hills appeared first on House Of Friendship.


Code Like a Girl

The Best of Code Like a Girl: April 2026

♦Image Created with ChatGPT

Here are the best stories from Code Like a Girl for April. They have been selected from everything we’ve published on Medium and Substack.

We use each platform differently.

Medium is where we publish more widely.

Substack is where we concentrate our strongest work. Only three stories a week, thoughtfully chosen and actively amplified.

Most of our Substack stories come from writers who don’t publish on Medium, so there’s very little overlap between the two.

If you’re only reading us here, you’re missing part of it.

You can find us on Substack here: substack.com/@codelikeagirl

From Our Substack Community
Why I left Meta

By Britta

Britta spent six years at Meta, loved the work, believed in the people, and left with her lowest rating still being Exceeds Expectations. What broke her wasn’t the workload or the timezone or the VR pivots. It was watching a culture that once rewarded empathy and honesty quietly reorganize itself around ego, silence, and unchecked power.

26 Authors. 14 Countries. Zero Budget

By Karen Smiley and Dinah Davis

26 women across 14 countries wrote a book together with no budget, no publisher, and no paid staff.
Every role, from editing to cover design, was filled by volunteers who showed up because they believed in what they were building.
This is what happens when women in tech stop waiting for permission and start creating the infrastructure themselves.

The Feedback That Taught Me Everything About Women and Power

By Emanuela B

An HR rep told Emanuela she takes up too much space, and she took it as the best compliment she’d ever received.
This piece unpacks the impossible pendulum women face in corporate life: too quiet and you disappear, too present and you become a problem. And it asks a harder question about mentorship: what if the advice women pass down is just a blueprint for shrinking?

From Our Medium Community
How to Improve Strategic Thinking for Effective Leadership

by Vinita

Most leaders are too busy firefighting to ever think strategically, and Vinita argues that is not a time problem, it is a priorities problem.

This piece breaks down exactly how to carve out the thinking space, make the hard trade-offs, and challenge the assumptions that keep organizations stuck in reactive mode. If you lead people and feel like you are always one step behind, this one is worth your time.

AI Agents: The Payback Tech Never Saw Coming

by Patricia Gestoso

The AI agent boom is not really about technology. It is about users finally getting revenge on an industry that spent decades telling them they were too stupid, too complicated, and too demanding to deserve software that actually worked for them.

Patricia Gestoso makes a sharp case for why agents are a symptom of massive customer dissatisfaction, and what tech has to do to earn back the trust it never bothered to build.

Claude Code at Scale: What Actually Works (and What Doesn’t)

by Nidhi Gahlawat

Nidhi Gahlawat spent two months using Claude Code on a massive Microsoft production codebase and came back with an honest verdict: it is genuinely useful, occasionally impressive, and wrong just often enough to keep you paying attention.

She breaks down exactly where it earns its place (PR reviews and boilerplate) and where it will confidently waste an hour of your time. If you use AI coding tools at work, this is the real-world take you actually need.

The Best of Code Like a Girl: April 2026 was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


James Davis Nicoll

Start Again / A Long and Speaking Silence (Singing Hills, volume 7) By Nghi Vo

2026’s A Long and Speaking Silence is the seventh volume in Nghi Vo’s Singing Hills secondary-world fantasy series.

A former novice, now Cleric Chih, travels to Luntien in search of stories. Chih will learn many useful lessons, starting with keeping a better eye on their wallet than they actually did.


Cordial Catholic, K Albert Little

The Christian Author Who Searched Every Denomination Then Became Catholic (w/ Traci Rhoades)

-/-

Elmira Advocate

HISTORY REPEATS ITSELF BOTH AT TRAC AND WITH WILMOT TWN. & WATERLOO REGION

 

I TOLD THE LIARS & DECEIVERS AT CPAC BACK AROUND 2005 THAT THERE WAS CHLOROBENZENE DNAPL OFFSITE BY THE HOWARD AVE. WATER TOWER - They denied it then; now they admit to the chlorobenzene and "residual" DNAPL.

WATERLOO REGION PLAYED DECEPTIVE GAMES WITH WILMOT'S WATER BACK IN THE 1970s. They initially denied it, then admitted it. They are doing the same thing all over again.


Parents, stop lecturing your children to stop lying. Clearly we are always going to need politicians and clearly they are always going to lie to us. If your child likes to lie, then nurture that skill and hope for the day when they lie to further your interests versus the public interest.

Today's K-W Record has the front page story and headline titled "Former Wilmot mayor watches history repeat itself". Clearly back in the early 1970s the City of Kitchener had absolutely no problem robbing Peter of water to quench Paul's thirst. This continued until Wilmot stood on their hind legs and gave the Kitchener bullies what for. Agreements were made, including that the Region would pay any Wilmot residents' costs required to drill deeper wells due to Kitchener drawing them down.

Similar bullies, polluters, politicians and long compromised regulators (MOE/MECP) have infested UPAC, CPAC initially, RAC, TAG and TRAC, all alleged public consultation bodies. I presented very strong evidence to the Chemtura Public Advisory Committee prior to 2007 that actually suggested that what Uniroyal's consultants had found was likely DNAPL (Dense Non Aqueous Phase Liquid) made up of chlorobenzene and other contaminants. As with pretty much all conclusions regarding contamination and cleanup, it was based upon hard evidence actually provided by Uniroyal's and corporate successors' own, client-driven consultants. In this case it was published in their monthly Progress Reports and concerned a surprising discovery found one hundred feet below ground surface in well OW57-32R, very near the Howard St. water tower. It has been vehemently denied for decades despite pumping well W4 being installed right beside it in order to speed up the dissolution of the DNAPL as well as keep the dissolved plume from spreading further under Elmira. Shortly after pumping well W4 was shut down, perhaps a little prematurely, downstream pumping well W3(R) and nearby observation wells such as CH75 began exhibiting increases in chlorobenzene. Hardly any surprise at all under the circumstances. Then in 2017 or 2018 Dr. Neil Thompson dropped the first bomb by advising that there was a lot more chlorobenzene in the Elmira aquifers than anybody had expected. By 2025 Jesse Wrighte of Arcadis Inc. advised that there were other sources of chlorobenzene located near the former Borg Textiles and the former Varnicolor Chemical. Allan Deal of GHD, on behalf of Lanxess, less than a year earlier, had advised as per the Minutes of a September 2024 TRAC meeting that nearby residual DNAPL was now dissolved. OH MY GOD BUT THE LYING BAST*RDS JUST CAN'T TELL THE TRUTH EVEN WHEN IT'S BITING THEM IN THE *SS. Residual DNAPL is the tail, if you will, of passing free-phase DNAPL that is no longer continuous, as in a "pool" of DNAPL.

This deceit, lying and manipulation of the truth has been the never ending story of the Elmira Water Crisis, and our politicians not only have failed to call the polluter (Uniroyal/Crompton/Chemtura/Lanxess) and regulator (MOE/MECP) on it but have enabled them throughout the last 36 1/2 years.

 


Grand River Rocks Climbing Gym

May Sale

The post May Sale appeared first on Grand River Rocks Climbing Gym.




Becca Grieb

Meta Ads in 2026: What's Working, What's Dead, and What to Do Next

If you've been running Meta ads for any length of time, you already know the feeling: something that worked beautifully six months ago suddenly stops performing, and you spend the next few weeks trying to figure out why.

Meta's ad ecosystem evolves constantly — the algorithm changes, the creative formats shift, privacy regulations reshape what data you can actually use, and your competitors are running more ads than ever. Keeping up isn't optional anymore. It's table stakes.

I work with ecommerce brands and growth-stage companies on their paid media strategy, and Meta ads are almost always part of the conversation. Here's my honest read on what's working right now, what I've stopped recommending, and what I think the next 12 months look like.

What's Actually Working in 2026

Broad Targeting (Yes, Really)

This one still surprises people, but the data keeps confirming it: broad targeting — or what Meta now calls "Advantage+ Audience" — is outperforming narrow, interest-based audiences for many advertisers.

This is a direct result of how much Meta's AI has improved. The algorithm is genuinely good at finding the right people now, often better than we are at manually defining who those people are. The days of stacking 15 interest layers and congratulating yourself on your laser-targeted audience are largely over.

What does this mean practically? Give Meta the creative, give it a budget, let the audience targeting stay wide, and let the algorithm learn. Test with strong creative rather than trying to engineer the audience.

The caveat: Broad targeting works best when you have enough conversion data for the algorithm to learn from. If you're a newer advertiser or running a brand with limited purchase history, you may still benefit from some audience signals to start.

Video Creative — But Specifically This Kind

Not all video is created equal right now. What's performing is short-form, native-feeling content that fits how people actually use the platform. Think lo-fi over produced. Think the way a creator would film something, not the way a brand would.

The content that stands out right now is: fast hooks in the first two seconds, authentic product demonstrations, creator-style testimonials, and content that doesn't immediately scream "this is an ad."

Polished 30-second brand films? Unless they're genuinely exceptional, they're getting skipped.

Advantage+ Shopping Campaigns

For ecommerce brands specifically, Advantage+ Shopping Campaigns (ASC) have become one of the best-performing formats available. Meta handles much of the heavy lifting — audience, placement, and optimization — while you focus on feeding it high-quality creative.

ASC works best when paired with a strong product catalogue, solid creative variety, and enough daily budget for the system to learn (typically $100+/day to start seeing meaningful data).

Retargeting With Intention

Retargeting isn't dead — but it's different than it was. With signal loss from iOS changes limiting how precisely you can retarget based on website behaviour, retargeting pools have shrunk and the data is less reliable.

What still works well: retargeting your email list and past purchasers (first-party data is king), video view retargeting (people who watched 50%+ of a video are warm), and engagement-based audiences.

What's worth pulling back from: site visitor retargeting with very short windows, which has become less accurate and more expensive.

What I've Stopped Recommending

Detailed Interest Targeting as a Primary Strategy

As I mentioned above: Meta's AI is better at finding your customer than you are at describing them with interest categories. Narrow interest targeting often restricts the algorithm unnecessarily and drives CPMs up.

I'm not saying never use it — I'm saying stop treating it as the foundation of your campaign strategy.

Optimizing for Link Clicks

If you're still running campaigns optimized for link clicks because it's cheaper than conversion campaigns — stop. You are paying to send curious people to your website, not buyers. The algorithm optimizes for what you tell it to optimize for. If you ask for clicks, you get clicks. You want purchases, customers, or at minimum, add-to-carts.

The only time I use link click optimization is for very top-of-funnel awareness plays where I genuinely don't care about conversion data and just want reach.

Changing Campaigns Too Quickly

This one is less about a format and more about behaviour. One of the most common mistakes I see is advertisers jumping in to make changes to campaigns before the algorithm has had enough time to learn.

Meta's ad system needs data to optimize. If you're resetting campaigns every 3-4 days because the ROAS isn't where you want it, you're working against yourself. Unless something is dramatically off, campaigns need at least 7 days and 50 optimization events to exit the learning phase. Patience is part of the strategy.
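As a rough sketch of that rule of thumb (the function and the example dates are invented; the 7-day and 50-event thresholds are the ones mentioned above), a simple guard like this can keep you from touching a campaign that hasn't had a fair chance to learn:

    # Hypothetical sketch of a "don't touch it yet" check, using the thresholds
    # described above. The campaign data in the example is made up.

    from datetime import date

    def safe_to_edit(launched_on, optimization_events, today, min_days=7, min_events=50):
        days_running = (today - launched_on).days
        return days_running >= min_days and optimization_events >= min_events

    print(safe_to_edit(date(2026, 5, 1), optimization_events=38, today=date(2026, 5, 6)))    # False: too early
    print(safe_to_edit(date(2026, 4, 20), optimization_events=120, today=date(2026, 5, 6)))  # True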

Over-Relying on Platform-Reported ROAS

Meta's reported ROAS is not the complete picture. It never was, but post-iOS 14, it's become even less reliable. Platforms measure what they can see — and with signal loss, they can't see everything.

Cross-reference your Meta-reported numbers with actual revenue in your store, first-party data, and post-purchase surveys. If Meta says you did $10K in attributed revenue but your Shopify dashboard shows a different story, the truth is probably somewhere in the middle.

What the Next 12 Months Look Like

A few things I'm watching closely:

AI-generated creative will become table stakes — but differentiation will be in the human layer. As more brands use AI tools to produce creative at scale, the volume of ads running on Meta will continue to rise. The winners will be brands that use AI for production efficiency while keeping the creative strategy and brand voice distinctly human.

First-party data becomes a real competitive moat. As signal loss continues to affect targeting accuracy, brands with large, engaged email lists and robust CRM data will have a significant advantage. Start building and cleaning your list now if you haven't.

Lead generation will see continued investment. With ecommerce margins under pressure, more brands are experimenting with lead gen as a way to move people into a lower-cost nurture sequence before converting. Expect more brand-to-DTC overlap in how companies think about their Meta strategy.

The creative testing flywheel separates winners from everyone else. The brands that will win on Meta in the next year are the ones running structured creative tests consistently — not one test a month, but ongoing, systematic creative experimentation. If you don't have a creative testing process, that's where I'd start.

The Bottom Line

Meta ads are still one of the highest-leverage paid channels available, especially for ecommerce and consumer brands. But the playbook has changed, and it keeps changing.

The advertisers who are winning right now are the ones who understand that Meta's AI is a tool to work with, not against — and who are investing in creative quality, first-party data, and strategic patience instead of trying to outsmart the algorithm with clever audience hacks.

Stop fighting the machine. Feed it what it needs, and focus your energy on the things it can't do for you: great creative, smart strategy, and a product worth selling.

Working through a Meta ads strategy and need a senior marketing perspective? Let's talk.

Capacity Canada

Brown Bagging For Calgary’s Kids

♦ BB4CK Board Member (Volunteer Position)

Executive – Calgary, Alberta (Hybrid)

Brown Bagging for Calgary’s Kids (BB4CK) is seeking passionate community leaders to join our Board of Directors. If meaningful action motivates you, if you want to make a real difference in the daily life of a child, and if you are passionate about creating social change and making a positive impact on your community…
You may be exactly who we are looking for!

About Us

Brown Bagging for Calgary’s Kids (BB4CK) is a community-funded non-profit organization that has been working for 35 years to make sure kids across Calgary have access to the food they need.

Together with the BB4CK community — donors, volunteers, school staff, and partners — we care for kids and families year-round. We work across the food-insecurity spectrum by preparing and delivering lunches to children in schools and summer camps, offering families a dignified choice through grocery cards, and advocating for systemic change in Calgary’s emergency food sector.

Our vision: A future where communities ensure no kids go hungry.

Our mission: Connect and inspire people to take meaningful action to feed and care for kids.

Opportunity

Brown Bagging for Calgary’s Kids is currently seeking new members to join its Board of Directors. Each position will commence with a two-year term starting in Fall 2026. You will be joining a mature and highly cohesive board of committed volunteers who bring deep passion, strong governance practices, and a shared dedication to BB4CK’s mission.

In our commitment to building a diverse Board, we are specifically seeking candidates with Board governance expertise and leadership skills. In addition, expertise in human resources leadership, finance/audit, and/or risk management would be viewed as strengths.

Brown Bagging for Calgary’s Kids is dedicated to fostering a diverse, equitable, and inclusive environment. We are seeking applicants of all backgrounds, experiences, and perspectives, including individuals with lived experience of food insecurity, poverty, or systemic barriers.

Responsibilities

The Board Member shall:

  • Act in a governance capacity, respecting the boundaries in accountability between management and the board.
  • Ensure the long-term viability of the agency by comprehensively understanding and approving the strategic plan and annual operating budget, while continuously monitoring ongoing performance against both.
  • Become familiar with the food insecurity landscape in Calgary, and how BB4CK can best meet its mandate.
  • Actively participate in Board and assigned committee meetings through regular attendance and appropriate preparation.
  • Demonstrate a willingness to serve as Chair of a Committee and/or Chair of the Board in future years.
  • Assist in communicating and promoting Brown Bagging for Calgary’s Kids (BB4CK) mission and programs to the community; play a central role in enhancing BB4CK’s reputation, advocating for its mission, and networking with the public and media.
  • Familiarize yourself with BB4CK’s operations and culture by participating in food preparation sessions and other program opportunities.
  • Attend special events.
  • Review and understand the bylaws, policies, and Board structure, and recommend changes as necessary.
  • Ensure BB4CK complies with all legal and regulatory requirements.
  • Support the Executive Director’s success and contribute to the annual performance evaluation of the Executive Director.
  • Support the recruitment and onboarding of new Board Members.
  • Demonstrate personal and professional values that align with those of BB4CK.
  • Assist in fostering and maintaining positive relationships among the Board, committees, staff members and the community to enhance BB4CK’s performance in achieving its mission.
Meetings and Time Commitment

Board members participate in quarterly Board meetings, with the occasional ad hoc meeting between scheduled sessions. Meetings are typically two hours and take place in person (where possible) in the late afternoon or evening.

  • Board members participate in two standing committees of the Board and may also serve on ad hoc committees. Committee meetings are held quarterly, typically run for 90 minutes and take place virtually weekdays at lunch, or in the late afternoon/early evening.
  • Board members are encouraged to attend the occasional special event throughout the year.
  • Typical time commitment is approximately 25 hours per quarter (~8 hours per month).
  • Our board terms extend up to six years, and we value directors who are ready to engage with BB4CK’s mission over that horizon. While we understand that circumstances can change, we’re seeking members who anticipate being able to contribute throughout that period.
Duty and Standard of Care

Every Board Member, in exercising their powers and fulfilling their duties, shall:

  • Act honestly and in good faith with a focus on the best interests of the Society.
  • Exercise the care, diligence, and skill that a reasonable and prudent person would demonstrate under comparable circumstances.
  • Adhere to the Society’s Code of Conduct, ensuring all decisions are made in the best interests of the organization while avoiding any personal conflicts of interest.
Remuneration

As a volunteer role, service on the Society’s Board is provided without remuneration, except for reimbursement of administrative support, travel, and accommodation expenses related to the fulfillment of Board duties.

Interested candidates are kindly invited to submit their applications by the end of day on June 1, 2026.

The post Brown Bagging For Calgary’s Kids appeared first on Capacity Canada.


Catherine Fife MPP

MPP Fife renews push for better handling of sexual assault cases in wake of Sloka trial

From the Waterloo Region Record, April 29, 2026: Waterloo MPP Catherine Fife is reintroducing her private member’s bill calling for more accountability and transparency in the handling of sexual assault cases.


Capacity Canada

Psoriasis Canada

2026 – PsoCan Volunteer Board Director Posting

Location: Canada (remote with some potential opportunity for travel)

Application Deadline: Tuesday May 19, 2026, 5pm ET

About Us

Psoriasis Canada is a national not-for-profit organization dedicated to improving the lives of people in Canada who live with psoriatic disease. As Canada’s trusted experts on psoriatic disease, we offer community, resources, and hope for a better future for those affected and those who care for them.

Opportunity

We are currently seeking a passionate and committed individual to fill a volunteer position on our Board of Directors for a term of three years, concluding in June 2029.

Role and Responsibilities

The Board of Directors is responsible for the overall governance and strategic direction of Psoriasis Canada. This is an unpaid volunteer position. Organization staff are responsible for all operations of the organization. The successful candidate will be asked to sign a volunteer agreement, confidentiality agreement, and conflict of interest form.

Responsibilities of the Directors include:

Leadership, Governance & Oversight
  • Directors help promote the mission and vision of the organization and advocate for Psoriasis Canada.
  • Directors provide guidance and oversight for organizational risk management.
  • The Board establishes governance structures to facilitate the performance of the Board’s role and enhance individual director performance. It ensures compliance with legal requirements as outlined in the Canadian Not-for-Profit Corporations Act and periodically reviews its policies and structure accordingly.
Strategic Planning
  • The Board will advance Psoriasis Canada’s mission, vision and values by overseeing its strategic plan and ensuring that operational plans are consistent with the strategic plan.
Resource Oversight
  • Fiduciary duties include financial stewardship of resources such as ensuring availability of and overseeing allocation of financial resources; ensuring that appropriate financial policies are in place for Psoriasis Canada; approving the annual budget and monitoring regularly.
Qualifications

Directorship is an opportunity for an individual who is passionate about Psoriasis Canada’s mission and who has a demonstrated commitment and the professional experience to serve in the role of Board leadership. A Director must:

  • Be 18 years of age or older
  • Be impacted by psoriasis and/or psoriatic arthritis, whether as a patient or caregiver / family member, or health care professional serving this community
  • Be committed to serving the needs of people impacted by psoriatic disease and their families and furthering the organization’s mission
  • Be able to communicate in English
  • Reside in Canada

Additional assets include:

  • Experience in healthcare, law, finance, fundraising, or non-profit governance.
  • Knowledge of the health care system in Canada and awareness of the health policy environment.
  • Ability to communicate in French or languages other than English.

Psoriasis Canada aims to reflect the communities that we serve so we welcome candidates with varied backgrounds and experiences to apply. All qualified applicants will receive consideration without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.

Term

This appointment will begin on June 16, 2026, and continue until June 2029. The appointed director will be eligible to stand for election by the Members at the board election cycle corresponding to the expiry of this term. If elected, they may serve one additional three-year term in accordance with the organization’s bylaws.

Time and Commitment

A Director is expected to commit to the meeting times (scheduled in advance) as per the board attendance policy, as well as to any other project commitments they volunteer to take on. This amounts to approximately 4–6 hours per month.

How to Apply

To apply, please send the following information by email to Christian Boisvert-Huneault, Psoriasis Canada’s Acting Chair at vicechair@psoriasiscanada.ca by Tuesday May 19, 2026 5pm ET.

  • Your résumé, including contact information (email and phone number) where you can be reached
  • A letter of interest that includes a summary of your experience with and/or interest in our organization and the skills and knowledge you are willing to bring to our board
  • Two professional references

Please note that applicants selected for further consideration will be contacted for an interview to be conducted between May 19 and May 22, 2026.

We appreciate the interest of all applicants; however, to manage capacity, only those selected for further consideration will be contacted.

The post Psoriasis Canada appeared first on Capacity Canada.


Code Like a Girl

Rendering Is a Browser Decision, Not a JavaScript One

This is the fifth article in a series on how JavaScript actually runs. You can read the full series here or on my website.

You change the DOM.

You expect the screen to update.

It doesn’t.

Why?

In the earlier articles, we established three constraints:

  1. JavaScript runs to completion.
  2. Tasks form scheduling boundaries.
  3. Microtasks must fully drain before moving on.

Now we add a fourth:

The browser will not render while a macrotask is running or while microtasks are draining.

Rendering Is a Browser Decision

Up to this point in the series, we’ve focused on two pieces of the system:

  1. The JavaScript engine, which executes code and manages the call stack.
  2. The runtime, which provides the event loop and scheduling rules.

But neither of these is responsible for rendering.

Beyond the JavaScript engine and the runtime, the browser also contains a rendering engine — the subsystem responsible for layout and painting.

The engine executes your code. The runtime manages when that code runs. The rendering engine decides when the result becomes visible.

For simplicity, this article will refer to that rendering engine simply as the browser.

The Rendering Misconception

When I first started learning JavaScript, I carried several mental models that felt reasonable:

  • DOM updates render immediately.
  • If I change the UI, the user will see it right away.
  • The browser renders continuously at 60fps.

These felt natural because the screen often updates quickly. But they’re incomplete: Rendering does not happen whenever the DOM changes. Instead, rendering happens only when there is a ‘safe opportunity’, after the current macrotask finishes and the microtask queue is empty.

Rendering is not triggered by DOM mutation. It is gated by scheduling boundaries. Let’s test that.

Running the Experiments

These experiments rely on the browser’s rendering behaviour.

  1. Create a simple HTML file with the following content:
<div id="box">Initial</div>

  2. Open the file in your browser.

  3. Run the code snippets from this series by pasting them into the browser console.

These examples will not work in Node.js because they depend on the DOM and browser rendering.
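
If you'd rather start from a complete file, a minimal page like the one below is enough. Everything beyond the div is standard boilerplate I've added for convenience; the tests only rely on the element with id "box".

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Rendering experiments</title>
  </head>
  <body>
    <!-- The tests read and update this element -->
    <div id="box">Initial</div>
  </body>
</html>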

Test 1: DOM Updates Inside One Macrotask

What happens when we have multiple DOM updates within the same macrotask? We may write something like the following, using a placeholder before the final string is ready:

const box = document.getElementById("box");

box.textContent = "Temporary string";

// Busy loop: keeps this macrotask running so the browser gets no chance to render yet
for (let i = 0; i < 1e9; i++) {}

box.textContent = "Final string of Test 1";

We might worry that "Temporary string" would briefly appear before "Final string" is ready. But that doesn't happen. Phew!

Both updates occur inside the same macrotask and the browser refuses to render mid-task. It waits until the entire macrotask is finished, checks that there is no microtask in the queue and finally considers rendering.

The intermediate DOM states never show.

Test 2: Microtasks Also Delay Rendering

What if the second update happens in a microtask instead? Would "Temporary string" appear briefly?

const box = document.getElementById("box");

box.textContent = "Temporary string";

Promise.resolve().then(() => {
  box.textContent = "Final string of Test 2";
});

Again, we only see "Final string of Test 2".

The initial macrotask runs and sets "Temporary string". Once the call stack is empty, the microtask runs immediately and updates the DOM to "Final string of Test 2". Only then does the browser get an opportunity to render.

Microtasks delay rendering just like synchronous code does.
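
To push on that a little (this sketch is mine, not one of the article's tests): even a chain of several microtasks drains completely before the browser gets its rendering opportunity, so none of the intermediate strings ever appear on screen.

const box = document.getElementById("box");

box.textContent = "Step 0";

// Each .then() queues another microtask; the whole chain drains
// before the browser is allowed to render, so only the last value paints.
Promise.resolve()
  .then(() => { box.textContent = "Step 1"; })
  .then(() => { box.textContent = "Step 2"; })
  .then(() => { box.textContent = "Final string: the only one you see"; });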

Test 3: Breaking Into a New Task Allows Paint

Now consider a timer callback:

const box = document.getElementById("box");

box.textContent = "Temporary string";

setTimeout(() => {
  box.textContent = "Final string of Test 3";
}, 1000);

This time we may see "Temporary string", followed by "Final string of Test 3" a second later.

Unlike the previous tests, we have now introduced a task boundary. The browser finishes the initial macrotask, drains microtasks (there are none here) and gets an opportunity to render. If it chooses to render, "Temporary string" becomes visible.

Later, when the runtime schedules the timer’s macrotask, the DOM updates to "Final string" and the next render will reflect this.

Rendering is allowed at task boundaries. This does not mean rendering is guaranteed between macrotasks, only that it cannot happen anywhere else.
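
A practical consequence (my own sketch, not part of the article's tests): if you split a long job across macrotasks with setTimeout, the task boundaries between chunks give the browser opportunities to paint, so a progress message can actually show up while the work runs. The processChunk helper here is hypothetical.

const box = document.getElementById("box");

// Hypothetical long job split into chunks. Each chunk ends its macrotask,
// so the browser may render the updated text before the next chunk starts.
function processChunk(i, total) {
  box.textContent = `Processing chunk ${i} of ${total}`;
  for (let j = 0; j < 1e8; j++) {} // stand-in for real work
  if (i < total) {
    setTimeout(() => processChunk(i + 1, total), 0);
  } else {
    box.textContent = "Done";
  }
}

processChunk(1, 5);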

Why Rendering Waits

If the browser could render in the middle of a macrotask or in the middle of microtask draining, it could display a half-updated DOM, inconsistent layout, or partially computed state.

Thankfully, with this constraint, the browser renders only stable states: a macrotask has finished and the microtask queue is empty. There is no partial work in progress, so rendering is atomic with respect to JavaScript execution.

The Correct Mental Model

With these tests, we’ve shown that the browser does not render whenever the DOM changes. Instead:

The browser renders only after JavaScript finishes its turn.

Here, a “turn” means the current macrotask completes and the microtask queue has been fully drained.

Rendering is allowed only at those boundaries. This does not mean the browser renders after every turn, only that it cannot render during one. The rendering decision is gated by the same scheduling rules we’ve been building throughout this series.

What This Prepares Us For Next

If rendering only happens at specific boundaries, a new question emerges: How do we write code that runs at the right moment?

setTimeout creates a new macrotask but it does not align with the browser's frame timing. Microtasks delay rendering but they do not schedule it. If we want smooth animation and responsive updates, we need a way to run code just before the browser renders the next frame.

This is what requestAnimationFrame is designed for. In the next article, we'll look more closely at how the browser's rendering cycle works and how to schedule work in harmony with it.
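
As a tiny preview of where that's going (a sketch of my own, ahead of the next article's deeper treatment), requestAnimationFrame runs your callback just before the browser's next paint:

const box = document.getElementById("box");

let frame = 0;

// Each callback runs right before a paint, so the counter
// advances once per rendered frame for roughly one second.
function tick() {
  frame++;
  box.textContent = `Frame ${frame}`;
  if (frame < 60) {
    requestAnimationFrame(tick);
  }
}

requestAnimationFrame(tick);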

This article was originally published on my website.

Rendering Is a Browser Decision, Not a JavaScript One was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

Taylor Swift had to trademark her own voice, saying her own name.

Welcome to AI. Where you have to legally prove you are you.

Continue reading on Code Like A Girl »


Code Like a Girl

Your User Stories Passed the AI Review… But They Still Broke the System

(Part 2 of: When Your AI Team Starts Challenging Your User Stories)

Continue reading on Code Like A Girl »


Code Like a Girl

How to Debug and Fix Queue Overflow in Distributed Systems?

Leveling Up System Design: #1 — A step-by-step guide to finding bottlenecks and fixing slow consumers in production

Continue reading on Code Like A Girl »


KW Predatory Volley Ball

Congratulations Jonah Dolhun. University of Toronto Commit.

Read full story for latest details.

Tag(s): Home