<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[blog.maxcomperatore.com]]></title><description><![CDATA[This is not a tech blog. This is a war journal from the front lines of the new economy.
I deconstruct the timeless, first-principle laws of leverage, wealth, an]]></description><link>https://blog.maxcomperatore.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1737763091327/61f347a4-0f31-4c2f-9006-eb48b7934614.png</url><title>blog.maxcomperatore.com</title><link>https://blog.maxcomperatore.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 18:46:45 GMT</lastBuildDate><atom:link href="https://blog.maxcomperatore.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The State of Argentina's Remote Tech Market (2026)]]></title><description><![CDATA[Everyone in the local tech scene talks about the "Senior USD" dream but very few people actually look at the underlying distribution of the data. I took the raw reports from Goncy's Salancy project an]]></description><link>https://blog.maxcomperatore.com/the-state-of-argentina-s-remote-tech-market-2026</link><guid isPermaLink="true">https://blog.maxcomperatore.com/the-state-of-argentina-s-remote-tech-market-2026</guid><dc:creator><![CDATA[Deactivated User]]></dc:creator><pubDate>Tue, 10 Mar 2026 14:49:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/a94854cd-e161-44df-b212-1968b26c51c1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone in the local tech scene talks about the "Senior USD" dream but very few people actually look at the underlying distribution of the data. I took the raw reports from Goncy's Salancy project and ran them through a full visualization pipeline to see where the money is actually moving.</p>
<p>We are looking at 907 unique reports. This isn't just noise. It is a clear map of how companies are paying right now.</p>
<p>Source: <a href="https://salarios.gonzalopozzo.com/">https://salarios.gonzalopozzo.com/</a></p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/89a1ffb3-8a05-4d39-9ec6-1e87070cbacb.png" alt="" style="display:block;margin:0 auto" />

<p>The first thing you notice in the Market Landscape bubble chart is the sheer volume of Backend and Fullstack reports. These are the engines of the industry. The bubbles represent report volume. You can see that once you cross the $5k USD threshold the bubbles start to shrink. The air gets thin up there.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/728221e9-d9a7-4a5c-9ecb-f68f0f87f6c8.png" alt="" style="display:block;margin:0 auto" />

<p>The Career Staircase is where things get interesting. Look at the slope of the lines. Almost everyone starts in a tight cluster between $1,000 and $2,000 as a Trainee or Junior. But as you hit Senior, the variance explodes. The gap between a Senior Solutions Architect and a Senior QA Manual is massive.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/f46bad35-bb8b-4d62-b965-d17e83927dc1.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/9c0c5737-4b6f-4255-a93b-be92354c536d.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/8b191f8c-f509-4abd-a830-b5b20e60e90c.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/dd9faf2f-e94b-4c2d-a05f-076fef09b499.png" alt="" style="display:block;margin:0 auto" />

<p>This is even clearer in the Income Boundaries step chart. The "Market Ceiling" (the red line) jumps aggressively at every level, while the "Market Floor" (the green line) stays relatively flat. This means that seniority doesn't guarantee a higher floor, but it gives you a much higher roof.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/47c12117-bf02-483a-942e-f7c5677976bc.png" alt="" style="display:block;margin:0 auto" />

<p>The Topography Map shows where the market is most saturated. The dark core is your "Standard Argentinian USD Salary." It lives between $2k and $3.5k. That is the safe zone.</p>
<p><strong>The Seniority Premium</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/002faa42-8003-4579-88f9-b5233c221cba.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/11786437-fe24-4955-9e3e-e58c008975d8.png" alt="" style="display:block;margin:0 auto" />

<p>If you want to know which career path has the most "upside" look at the Income Gap dumbbell plot. A long bar means a role where seniority pays off exponentially. Solutions Architects and Design Engineers have the longest bars. They are playing a different game than the rest of the market.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/ea8f7de1-59f4-45fb-9019-5ed7608bf021.png" alt="" style="display:block;margin:0 auto" />

<p>The Salary Waterfall shows the transition. We see a massive stack of Juniors and Semi-Seniors at the low end and then the Seniors "breaking away" into the high compensation brackets.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/44679297-2625-424a-8f94-1609d56e8669.png" alt="" style="display:block;margin:0 auto" />

<p>Total Market Weighted Average: $4,126 USD<br />High Volume Weighted Average (&gt;10 reports): $4,003 USD</p>
<p>The market is healthy, but it is skewed. If you want to break the $6k barrier, you need to stop being a generalist and start moving into the niche, high-ceiling roles shown at the top of the dumbbell chart.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66a821c234e4a5b860c69e61/853d4868-5482-4426-8c08-36526575fb3e.png" alt="" style="display:block;margin:0 auto" />

<p>Analysis performed on 1/1/26 data.</p>
]]></content:encoded></item><item><title><![CDATA[Why AI is Killing Your Focus (And the System to Fix It)]]></title><description><![CDATA[We are living in an age of miracles.
With a few lines of natural language, we can summon code, generate art, and architect systems that would have taken a team of specialists weeks to build just a few years ago. The friction that once defined the act...]]></description><link>https://blog.maxcomperatore.com/why-ai-is-killing-your-focus-and-the-system-to-fix-it</link><guid isPermaLink="true">https://blog.maxcomperatore.com/why-ai-is-killing-your-focus-and-the-system-to-fix-it</guid><category><![CDATA[AI]]></category><category><![CDATA[focus]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[software development]]></category><category><![CDATA[Python]]></category><category><![CDATA[Rust]]></category><category><![CDATA[Energy]]></category><category><![CDATA[challenge]]></category><category><![CDATA[Twitter]]></category><category><![CDATA[Xcode]]></category><dc:creator><![CDATA[Deactivated User]]></dc:creator><pubDate>Sat, 13 Sep 2025 21:57:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KDK0MzCj4wY/upload/63d97fb859f3466a7b95ed5df277ffd9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are living in an age of miracles.</p>
<p>With a few lines of natural language, we can summon code, generate art, and architect systems that would have taken a team of specialists weeks to build just a few years ago. The friction that once defined the act of creation—the blank page, the setup hell, the Yak Shaving—has been almost entirely annihilated.</p>
<p>We have been given an infinite power engine. And for many of us, it is killing our ability to finish anything.</p>
<p>If you have ever felt this, you are not alone:</p>
<blockquote>
<p><em>“It’s so easy to code now with AI… but I work on a project for 30 minutes, get a quick dopamine hit, and then I ditch it for the next shiny idea.”</em></p>
</blockquote>
<p>This feeling is not a personal failure. It is not laziness. It is a predictable psychological response to a new reality. You are experiencing <strong>Context Collapse.</strong> Your brain, flooded with infinite possibility and frictionless creation, has lost its ability to commit.</p>
<p>To win in this new era, we do not need more willpower. We need a <strong>new operating system.</strong> We need a set of AI-aware workflows designed not just to start, but to <strong>ship.</strong></p>
<h3 id="heading-the-anatomy-of-the-abyss-deconstructing-the-why">The Anatomy of the Abyss: Deconstructing the "Why"</h3>
<p>Before we can build the system, we must understand the forces working against us. This is not a simple problem. It is a multi-layered psychological trap.</p>
<ol>
<li><p><strong>The Tyranny of Possibility:</strong> When you can build <em>anything</em> in thirty minutes, the psychological weight of choosing to commit to <em>one thing</em> becomes immense. Every project you choose is a thousand other fascinating projects you are choosing <em>not</em> to do.</p>
</li>
<li><p><strong>The Death of Sunk Cost:</strong> The brutal, tedious, multi-hour process of setting up a new development environment used to be a powerful psychological anchor. That "sunk cost" of effort made you more likely to see a project through. AI has eliminated this, and with it, a critical mechanism for commitment.</p>
</li>
<li><p><strong>The Dopamine Casino:</strong> The AI-native workflow is a perfectly tuned dopamine slot machine. Idea -&gt; Prompt -&gt; "Wow, I made a thing!" -&gt; Post on X -&gt; Get likes. Your brain registers this as a completed mission loop. The hard, "boring" work of iterating, debugging, and refining a project offers a far lower and slower dopamine return than simply starting the next new, exciting thing.</p>
</li>
<li><p><strong>The Illusion of Synthetic Progress:</strong> AI is a master of building the first 80% of a project. It can generate the surface-level code with incredible speed. But the final 20%—the deep integration, the nuanced bug fixes, the polishing—is where the real work and the real value lies. And that work feels brutally slow compared to the initial, AI-powered sprint.</p>
</li>
</ol>
<h3 id="heading-the-system-an-anti-ditch-protocol-for-the-ai-native-builder">The System: An "Anti-Ditch" Protocol for the AI-Native Builder</h3>
<p>We will not fight this with brute force. We will fight this with a smarter system. This protocol is designed to re-introduce the healthy friction and psychological anchors that AI has removed.</p>
<p><strong>1. The 3-Day Rule: Make <em>Choosing</em> Hard.</strong><br />AI makes starting easy. Your job is to make <em>choosing</em> what to start the hardest part of your process. When an idea strikes, you are forbidden from opening your code editor. You will write it down in a simple text file. Then, you walk away. If, after three days, the idea is still burning a hole in your mind, if it keeps coming back to you in the shower, then, and only then, are you allowed to begin.</p>
<p><strong>2. Define the Minimum Viable <em>Outcome</em> (MVO).</strong><br />Before you write a single line of code, you must define the victory condition. An MVP (Minimum Viable Product) is too vague. You need an <strong>MVO (Minimum Viable Outcome).</strong> This is the smallest, concrete result that would make you feel the project was a success.</p>
<ul>
<li><p><em>Bad MVO:</em> "Build a cool app."</p>
</li>
<li><p><em>Good MVO:</em> "Get 10 non-friends to sign up for the waitlist."</p>
</li>
<li><p><em>Good MVO:</em> "Generate my first $5 in revenue."</p>
</li>
<li><p><em>Good MVO:</em> "Get one retweet from a builder I admire."<br />  You must attach your ego and your dopamine reward to this <strong>outcome,</strong> not to the act of prompting inside Cursor.</p>
</li>
</ul>
<p><strong>3. Pre-Commit Publicly.</strong><br />Willpower is a finite resource. Social pressure is an infinitely renewable one. Before you begin, you will make a small, public commitment.</p>
<ul>
<li><p>Tweet: "Dedicating the next 7 days to building [X]. My MVO is to [Y]. I will share the results, success or failure, next Friday."</p>
</li>
<li><p>Tell a friend: "I need you to be my accountability partner. I will send you a link to the finished project by EOD Friday. If I don't, call me out."<br />  This simple act creates a psychological "point of no return."</p>
</li>
</ul>
<p><strong>4. Engineer Cognitive Friction.</strong><br />Do not let the AI do everything. You must get your hands dirty. The dirt is what creates the feeling of <strong>ownership.</strong></p>
<ul>
<li><p>Let the AI generate the boilerplate. Then, <strong>delete 30% of it</strong> and rewrite it in your own style.</p>
</li>
<li><p>Let the AI explain a complex architecture. Then, <strong>draw it by hand</strong> on a piece of paper.</p>
</li>
<li><p>Let the AI write the tests. Then, <strong>intentionally break them</strong> and fix them yourself.<br />  You need to feel the burn. The friction is what forges the commitment.</p>
</li>
</ul>
<p><strong>5. The "Ditch Log": Your Personal Debugger.</strong><br />Every time you abandon a project, you will perform a five-minute post-mortem. Create a simple markdown file, <code>ditch-log.md</code>. For each ditched project, you will write:</p>
<ul>
<li><p><strong>What was it?</strong></p>
</li>
<li><p><strong>Why did I ditch it?</strong> (Be brutally honest. "I got bored." "A new, shinier idea came along.")</p>
</li>
<li><p><strong>What did I learn?</strong></p>
</li>
<li><p><strong>Would I ever come back to this?</strong><br />  Over time, patterns will emerge. You are not just building projects; you are <strong>debugging your own focus.</strong></p>
</li>
</ul>
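<p>As a concrete sketch, an entry in that file could look like this (the project and answers here are invented purely for illustration; the field names mirror the four questions above):</p>
<pre><code class="lang-markdown">## 2025-09-01 — AI Recipe Generator (hypothetical example entry)

- **What was it?** A GPT-powered weekly meal planner.
- **Why did I ditch it?** I got bored once the initial prompting was done.
- **What did I learn?** Streaming API responses; also, I never defined an MVO.
- **Would I ever come back to this?** Only with a concrete MVO and a public deadline.
</code></pre>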
<h3 id="heading-the-final-reframe">The Final Reframe</h3>
<p>AI did not kill your focus. It <strong>revealed a weakness</strong> in your personal operating system.</p>
<p>The builders who win in this new era will not be the ones who can prompt the fastest. They will be the ones who have architected the most robust internal systems for <strong>choosing wisely, committing publicly, and shipping relentlessly.</strong></p>
<p>You have been given the most powerful tools in human history.</p>
<p>Now, you must build the discipline to wield them.</p>
<p>— Max</p>
]]></content:encoded></item><item><title><![CDATA[Practical Examples of C++ Concurrency]]></title><description><![CDATA[For more insights and resources on programming and game development, visit my website at maxcomperatore.com.
std::mutex:
A std::mutex is like a bathroom door with a lock. When a thread tries to access a resource protected by the mutex, it locks the d...]]></description><link>https://blog.maxcomperatore.com/practical-examples-of-c-concurrency</link><guid isPermaLink="true">https://blog.maxcomperatore.com/practical-examples-of-c-concurrency</guid><category><![CDATA[#cpp #guide]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[programming languages]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[Computer Science]]></category><category><![CDATA[problem solving skills]]></category><category><![CDATA[parallelism]]></category><dc:creator><![CDATA[Deactivated User]]></dc:creator><pubDate>Fri, 23 Aug 2024 03:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/tMbQpdguDVQ/upload/f00fcb219bfde955445ee955f24ab0ca.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For more insights and resources on programming and game development, visit my website at <a target="_blank" href="http://maxcomperatore.com">maxcomperatore.com</a>.</p>
<h2 id="heading-stdmutexhttpsmaxcomperatorecom"><a target="_blank" href="https://maxcomperatore.com">std::mutex:</a></h2>
<p>A <code>std::mutex</code> is like a bathroom door with a lock. When a thread tries to access a resource protected by the mutex, it locks the door behind it. This prevents other threads from entering until the current occupant finishes and unlocks the door. When a thread finishes using the shared resource, it unlocks the mutex, allowing other threads to access the resource.</p>
<p><img src="https://blogger.googleusercontent.com/img/a/AVvXsEjQSDj-u_TdzEfJ_hK-mffA-ruYeYRHyK2BYWwN7W0BmgibK2Xn8hNPyYwaBlm3FRLu3UYJD-PyEeeSCuW8G0QTS2DrXO0OFXibd-m-vKvUeU8w3_LEexn8YN0TZOQalZxg3PJVnZgmA4tcCt5IRJmDS8Rfi9_5kHCSYorfh3AgAOXNb4eKcuFs6j7YsOI=rw" alt="Diagram showing two threads (green and pink) and a shared resource managed through a mutex. Threads wait and release the mutex to access the shared resource." /></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::mutex mtx;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">bathroom</span><span class="hljs-params">()</span></span>{
  <span class="hljs-comment">// One enters the bathroom, the other waits. Like 2 friends rushing toward a bathroom</span>
  <span class="hljs-function"><span class="hljs-built_in">std</span>::lock_guard&lt;<span class="hljs-built_in">std</span>::mutex&gt; <span class="hljs-title">lock</span><span class="hljs-params">(mtx)</span></span>;
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Using the bathroom"</span> &lt;&lt; <span class="hljs-built_in">std</span>::<span class="hljs-built_in">endl</span>;
  <span class="hljs-comment">// Doing things...</span>
  <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">2</span>));
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Leaving the bathroom"</span> &lt;&lt; <span class="hljs-built_in">std</span>::<span class="hljs-built_in">endl</span>;
  <span class="hljs-comment">// The other one can enter now</span>
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t1</span><span class="hljs-params">(bathroom)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t2</span><span class="hljs-params">(bathroom)</span></span>;
  t1.join();
  t2.join();
  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-stdtimedmutex">std::timed_mutex</h2>
<p><code>std::timed_mutex</code> is like a bathroom where you can wait for a specific time before giving up. If the mutex is not acquired within the given time, the thread abandons the attempt and continues.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::timed_mutex mtx;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">bathroom</span><span class="hljs-params">()</span></span>{
  <span class="hljs-comment">// the second thread will wait for 3 seconds to use the bathroom, otherwise it will look for another bathroom.</span>
  <span class="hljs-keyword">if</span>(mtx.try_lock_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">3</span>))){
    <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Using bathroom"</span> &lt;&lt; <span class="hljs-built_in">std</span>::<span class="hljs-built_in">endl</span>;
    <span class="hljs-comment">// thread using the bathroom is less time than the other thread is willing to wait,</span>
    <span class="hljs-comment">// so both threads will be able to go to the toilet, one after the other</span>
    <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">2</span>));
    <span class="hljs-comment">// leaving toilet</span>
    mtx.unlock();
  }<span class="hljs-keyword">else</span>{
    <span class="hljs-comment">// if the lock is not acquired within 3 seconds, the thread gives up instead of waiting forever</span>
    <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Bathroom is occupied; I'll look for another one because I waited too long (3 seconds)"</span> &lt;&lt; <span class="hljs-built_in">std</span>::<span class="hljs-built_in">endl</span>;
  }
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t1</span><span class="hljs-params">(bathroom)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t2</span><span class="hljs-params">(bathroom)</span></span>;
  t1.join();
  t2.join();
  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-stdrecursivemutex">std::recursive_mutex</h2>
<p><code>std::recursive_mutex</code> is like a bathroom with multiple stalls, allowing a thread to acquire the mutex multiple times. This is useful for recursive functions. Ensure that you unlock as many times as you lock to avoid deadlocks.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> NUM_STALLS = <span class="hljs-number">3</span>;
<span class="hljs-built_in">std</span>::recursive_mutex bathroom_mutex;
void enterBathroom(int personId) {
  std::lock_guard&lt;std::recursive_mutex&gt; guard(bathroom_mutex);
  std::cout &lt;&lt; "Person " &lt;&lt; personId &lt;&lt; " enters the bathroom.\n";
  // Simulate using multiple stalls by acquiring the mutex multiple times.
  // Other threads are blocked until every lock is released.
  for (int i = 0; i &lt; 2; ++i) {
    // The same thread can lock a recursive_mutex again without deadlocking.
    bathroom_mutex.lock();
    std::cout &lt;&lt; "Person " &lt;&lt; personId &lt;&lt; " is using stall " &lt;&lt; i + 1 &lt;&lt; ".\n";
  }
  // Simulate using the stalls for some time
  std::this_thread::sleep_for(std::chrono::seconds(3));
  // Unlock once for each extra lock, or other threads stay blocked forever.
  for (int i = 0; i &lt; 2; ++i) {
    bathroom_mutex.unlock();
  }
  std::cout &lt;&lt; "Person " &lt;&lt; personId &lt;&lt; " exits the bathroom.\n";
} // The lock_guard is destroyed here, releasing the last lock.

int main() {
  // Create threads representing people entering the bathroom
  std::thread people[NUM_STALLS];
  for (int i = 0; i &lt; NUM_STALLS; ++i) {
    people[i] = std::thread(enterBathroom, i + 1);
  }
  // Join threads to wait for them to finish
  for (int i = 0; i &lt; NUM_STALLS; ++i) {
    people[i].join();
  }
  // Possible output:
  // Person 1 enters the bathroom.
  // Person 1 is using stall 1.
  // Person 1 is using stall 2.
  // Person 1 exits the bathroom.
  // Person 3 enters the bathroom.
  // Person 3 is using stall 1.
  // Person 3 is using stall 2.
  // Person 3 exits the bathroom.
  // Person 2 enters the bathroom.
  // Person 2 is using stall 1.
  // Person 2 is using stall 2.
  // Person 2 exits the bathroom.
  return 0;
}
</code></pre>
<h2 id="heading-stdrecursivetimedmutex">std::recursive_timed_mutex:</h2>
<p><code>std::recursive_timed_mutex</code> combines the features of <code>std::recursive_mutex</code> with a timeout mechanism. It allows recursive locking but will abandon the attempt if the lock is not acquired within the specified time.</p>
<h2 id="heading-stdlatch">std::latch:</h2>
<p><code>std::latch</code> acts as a synchronization barrier that waits for a specified number of threads to reach a certain point before proceeding.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;latch&gt;</span></span>

// The latch must be shared by every thread, so declare it once at namespace scope.
// We need three participants to start the race (to unlock the latch).
std::latch startingLine(3);

void participant(const int id) {
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Participant "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" has arrived at the starting line.\n"</span>;
  <span class="hljs-comment">// Decrement the count of the latch</span>
  startingLine.count_down();
  <span class="hljs-comment">// Wait until all participants have arrived (until the count reaches 0)</span>
  startingLine.wait();
  <span class="hljs-comment">// This is equivalent to the two lines above but combined</span>
  <span class="hljs-comment">// startingLine.arrive_and_wait();</span>
  <span class="hljs-comment">// All participants start at the same time thanks to the latch.</span>
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Participant "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" starts the race!\n"</span>;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t1</span><span class="hljs-params">(::participant, <span class="hljs-number">1</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t2</span><span class="hljs-params">(::participant, <span class="hljs-number">2</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t3</span><span class="hljs-params">(::participant, <span class="hljs-number">3</span>)</span></span>;

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>All threads wait at the latch until every thread has arrived. It's like a checkpoint that ensures everyone is at the same place! Note that a <code>std::latch</code> is single-use: once its count reaches zero, it cannot be reset.</p>
<h2 id="heading-stdbarrier">std::barrier</h2>
<p><code>std::barrier</code> is a reusable synchronization primitive that allows multiple threads to synchronize at specific points in their execution. Unlike <code>std::latch</code>, a <code>std::barrier</code> automatically begins a new phase once all threads arrive, so it can be used again.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;barrier&gt;</span></span>

<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> NUM_THREADS = <span class="hljs-number">3</span>;
<span class="hljs-function"><span class="hljs-built_in">std</span>::barrier <span class="hljs-title">barrier</span><span class="hljs-params">(::NUM_THREADS)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">worker</span><span class="hljs-params">(<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> id)</span> </span>{
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Worker "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" started\n"</span>;
  <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(id));
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Worker "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" finished work and waiting at the barrier\n"</span>;
  <span class="hljs-comment">// Wait at the barrier</span>
  ::barrier.arrive_and_wait();
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Worker "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" passed the barrier and continued\n"</span>;

  // std::barrier has no reset() member: it automatically begins a new phase
  // once every thread has arrived, so we can simply reuse it.
  std::cout &lt;&lt; "Worker " &lt;&lt; id &lt;&lt; " waiting at the barrier again\n";

  // Wait at the barrier a second time
  ::barrier.arrive_and_wait();

  // ...
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t1</span><span class="hljs-params">(::worker, <span class="hljs-number">1</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t2</span><span class="hljs-params">(::worker, <span class="hljs-number">2</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t3</span><span class="hljs-params">(::worker, <span class="hljs-number">3</span>)</span></span>;

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>Barriers make all threads wait until the expected number of them has arrived; then the barrier opens, <strong>and it resets automatically after each phase, so we can use it again later down the code!</strong></p>
<h2 id="heading-stdatomic">std::atomic:</h2>
<p><code>std::atomic</code> provides a way to perform thread-safe operations on variables without requiring locks. It ensures that operations are performed atomically, avoiding data races.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;atomic&gt;</span></span>

<span class="hljs-function"><span class="hljs-built_in">std</span>::atomic&lt;<span class="hljs-keyword">int</span>&gt; <span class="hljs-title">counter</span><span class="hljs-params">(<span class="hljs-number">0</span>)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">incrementCounter</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-comment">// can handle writes and reads from multiple threads, without an atomic this would return a wrong, garbage value</span>
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">10000</span>; ++i) {
    <span class="hljs-comment">// atomically adding 1, same as +=.</span>
    counter.fetch_add(1, std::memory_order_relaxed);
  }
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t1</span><span class="hljs-params">(incrementCounter)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t2</span><span class="hljs-params">(incrementCounter)</span></span>;

  t1.join();
  t2.join();

  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Final value of counter: "</span> &lt;&lt; counter &lt;&lt; <span class="hljs-built_in">std</span>::<span class="hljs-built_in">endl</span>; <span class="hljs-comment">// 20,000, clean.</span>

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-stdthread">std::thread:</h2>
<p>A std::thread is a fundamental building block of concurrent programming in C++. It represents a separate execution context that can run concurrently with other threads in a program. You create a thread by passing it a function to execute; once that function returns, the thread finishes its execution (but the std::thread object itself still needs to be dealt with).</p>
<p>It's crucial to manage the lifecycle of std::thread objects properly. When you create a std::thread, you must join it or detach it before it goes out of scope; destroying a still-joinable thread calls std::terminate. Typically, you call <code>.join()</code> to wait for a thread to finish its execution before proceeding with the rest of the program. However, this is the old approach...</p>
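<p>A minimal sketch of the lifecycle rule above (the variable and lambda are illustrative):</p>
<pre><code class="lang-cpp">#include &lt;iostream&gt;
#include &lt;thread&gt;

int main() {
  int result = 0;
  // The lambda runs in a separate execution context.
  std::thread t([&amp;result] { result = 42; });
  // Without this join() (or a detach()), the destructor of t
  // would call std::terminate when t goes out of scope.
  t.join();
  std::cout &lt;&lt; "Result: " &lt;&lt; result &lt;&lt; '\n'; // safe to read after join
  return 0;
}
</code></pre>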
<h2 id="heading-stdjthread">std::jthread:</h2>
<p><code>std::jthread</code> is a modern alternative to <code>std::thread</code> introduced in C++20. It automatically joins when it goes out of scope, simplifying thread management and reducing the risk of resource leaks.</p>
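<p>A short sketch of the difference (the names are illustrative). The destructor of a <code>std::jthread</code> also requests a stop through its built-in stop token before joining:</p>
<pre><code class="lang-cpp">#include &lt;atomic&gt;
#include &lt;chrono&gt;
#include &lt;iostream&gt;
#include &lt;thread&gt;

std::atomic&lt;int&gt; ticks{0};

int main() {
  {
    std::jthread worker([](const std::stop_token&amp; tk) {
      while (!tk.stop_requested()) {
        ++ticks;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
      }
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
  } // &lt;- the destructor calls request_stop() and join(); no explicit join needed
  std::cout &lt;&lt; "Ticks counted: " &lt;&lt; ticks &lt;&lt; '\n';
  return 0;
}
</code></pre>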
<h2 id="heading-stdstoptoken-stdstopsource-stdstopcallback">std::stop_token, std::stop_source, std::stop_callback:</h2>
<p>These are used for collaborative thread stopping. A <code>std::stop_token</code> allows a thread to check if a stop request has been made, while a <code>std::stop_source</code> is used to request the stop. <code>std::stop_callback</code> provides a way to register callbacks to be executed when a stop request is made.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;chrono&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-built_in">std</span>::stop_source stopSrc;

  <span class="hljs-function"><span class="hljs-built_in">std</span>::stop_callback <span class="hljs-title">cb</span><span class="hljs-params">(stopSrc.get_token(),
                        []() { <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Callback called!\n"</span>; })</span></span>;

  <span class="hljs-comment">// We do some work in the lambda and pass a stop_token to it.</span>
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">jt0</span><span class="hljs-params">([](<span class="hljs-keyword">const</span> <span class="hljs-built_in">std</span>::stop_token&amp; tk) {
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">1'000'000'000</span>; ++i) {
      <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Printing value: "</span> &lt;&lt; i &lt;&lt; <span class="hljs-string">'\n'</span>;

      <span class="hljs-comment">// If stopSrc.request_stop() is called, the token is stopped.</span>
      <span class="hljs-keyword">if</span> (tk.stop_requested()) {
        <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Thread stopped!\n"</span>;
        <span class="hljs-keyword">return</span>;
      }
    }
  })</span></span>;

  <span class="hljs-comment">// Sleep the main thread for 2 seconds, so that the jthread can do some work.</span>
  <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">2</span>));

  <span class="hljs-comment">// Requesting the associated token (and thread(s)) to stop.</span>
  stopSrc.request_stop();
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Request to stop thread!\n"</span>;

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-stdcountingsemaphore">std::counting_semaphore:</h2>
<p>A <code>std::counting_semaphore</code> is a synchronization primitive, like a bathroom with limited capacity. The template parameter <code>x</code> in <code>std::counting_semaphore&lt;x&gt;</code> is the maximum count, <em>indicating the maximum number of entities (threads, for example) that can access a shared resource simultaneously.</em></p>
<p>The constructor argument (<code>y</code>) sets the semaphore's <em>initial count</em>: how many permits are available the moment the semaphore is created. Think of it as a more general mutex with signaling and a capacity greater than one.</p>
<p><code>x</code> and <code>y</code> are usually the same value, because a resource normally starts fully available; a lower initial count means some permits must be released before they can be acquired, which is exactly what the producer-consumer pattern below exploits.</p>
<p>For instance, a <strong>binary semaphore</strong> (<code>std::counting_semaphore&lt;1&gt;</code>) allows only one entity to access the resource at a time, <em>mimicking the behavior of a single-stall bathroom where occupancy is restricted to one person</em>.</p>
<p><em>A semaphore with a higher maximum count (e.g., <code>std::counting_semaphore&lt;2&gt;</code>) permits multiple entities to access the resource simultaneously.</em></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;semaphore&gt;</span></span>

<span class="hljs-comment">// Semaphore with a maximum count of 3, capacity of 3.</span>
<span class="hljs-function"><span class="hljs-built_in">std</span>::counting_semaphore&lt;3&gt; <span class="hljs-title">semaphore</span><span class="hljs-params">(<span class="hljs-number">3</span>)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">accessResource</span><span class="hljs-params">(<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> id)</span> </span>{
  <span class="hljs-comment">// Acquire a permit from the semaphore</span>
  ::semaphore.acquire();
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Thread "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" has the permit.\n"</span>;
  <span class="hljs-comment">// Simulate accessing the resource</span>
  <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">5</span>));
  <span class="hljs-comment">// Release the permit back to the semaphore</span>
  ::semaphore.release();
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Thread "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" has released the permit.\n"</span>;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t1</span><span class="hljs-params">(::accessResource, <span class="hljs-number">1</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t2</span><span class="hljs-params">(::accessResource, <span class="hljs-number">2</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t3</span><span class="hljs-params">(::accessResource, <span class="hljs-number">3</span>)</span></span>;
  <span class="hljs-comment">// Fourth thread trying to access the resource</span>
  <span class="hljs-comment">// Will be blocked until one of the first three threads releases the permit</span>
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t4</span><span class="hljs-params">(::accessResource, <span class="hljs-number">4</span>)</span></span>;
  // Possible output (the order varies between runs):
  // Thread 2 has the permit.
  // Thread 3 has the permit.
  // Thread 1 has the permit.
  // Thread 1 has released the permit.
  // Thread 2 has released the permit.
  // Thread 3 has released the permit.
  // Thread 4 has the permit.
  // Thread 4 has released the permit.

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-stdbinarysemaphore">std::binary_semaphore:</h2>
<p><strong>std::binary_semaphore is an alias for std::counting_semaphore&lt;1&gt;.</strong> It is a synchronization primitive with a maximum count of 1: <em>it allows only one entity (such as a thread) to access a shared resource at a time.</em> Like a mutex, a binary semaphore can be used to <em>protect critical sections of code or shared resources from concurrent access</em>. Unlike a mutex, however, a <em>binary semaphore provides signaling: one thread can release a permit that a different thread acquires</em>, making it suitable for more complex synchronization scenarios.</p>
<p>For example, in a producer-consumer scenario, a binary semaphore can be used to control access to a shared buffer. <em>The producer signals the semaphore when it adds data to the buffer, and the consumer waits for the semaphore to be signaled before accessing the buffer.</em> This ensures that the producer and consumer do not access the buffer simultaneously.</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;semaphore&gt;</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">define</span> SIMUL_WORK std::this_thread::sleep_for(std::chrono::milliseconds(1000))</span>

<span class="hljs-comment">// An empty bathroom, ready for 1 to enter and produce.</span>
<span class="hljs-built_in">std</span>::binary_semaphore producer{ <span class="hljs-number">1</span> };
<span class="hljs-comment">// A full bathroom (0 can enter), when 1 releases, 1 can enter.</span>
<span class="hljs-comment">// Someone must produce first, and release the bathroom for the consumer.</span>
<span class="hljs-built_in">std</span>::binary_semaphore consumer{ <span class="hljs-number">0</span> };

void produce() {
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">5</span>; ++i) {
    <span class="hljs-comment">// We enter the producer bathroom, no one can enter.</span>
    ::producer.acquire();
    SIMUL_WORK;
    <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Produced\n"</span>;
    <span class="hljs-comment">// We leave the consumer bathroom, so 1 can enter.</span>
    ::consumer.release();
  }
}

void consume() {
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">5</span>; ++i) {
    <span class="hljs-comment">// We enter the consumer bathroom, no one can enter.</span>
    ::consumer.acquire();
    <span class="hljs-comment">// Simulate consumption time</span>
    SIMUL_WORK;
    <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Consumed\n"</span>;
    <span class="hljs-comment">// We leave the producer bathroom, so 1 can enter.</span>
    ::producer.release();
  }
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t1</span><span class="hljs-params">(::produce)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::jthread <span class="hljs-title">t2</span><span class="hljs-params">(::consume)</span></span>;
  // The producer's semaphore starts at 1, so the producer enters first,
  // produces, then releases the consumer's semaphore so the consumer can run.
  // When the consumer finishes, it releases the producer again.
  // The two threads therefore alternate strictly:
  // Produced, Consumed, Produced, Consumed, ... (5 times each).
  // The jthreads join automatically when main returns.
}
</code></pre>
<h2 id="heading-stdatomicflag">std::atomic_flag:</h2>
<p>std::atomic_flag provides atomic operations on a <strong>boolean</strong> flag. Unlike std::atomic&lt;bool&gt;, std::atomic_flag has <strong>less functionality</strong> (though C++20 added test(), wait() and notify_one()), but it is more lightweight, and it is the only atomic type guaranteed to be lock-free.</p>
<p>One of the primary uses of std::atomic_flag is for implementing <strong>spin locks</strong>, where threads repeatedly check the state of the flag until it changes. This makes std::atomic_flag ideal for scenarios where you need <em>lightweight synchronization without the overhead of a mutex or a condition variable.</em></p>
<p>It's worth noting that for more complex synchronization scenarios or <em>when additional functionality is required</em>, std::condition_variable may be a better choice. std::condition_variable allows threads to wait efficiently for a condition to become true, providing more flexibility and options for signaling between threads.</p>
<pre><code class="lang-cpp">#include &lt;atomic&gt;
#include &lt;chrono&gt;
#include &lt;iostream&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

void critical_section(const int id) {
  // Default-initialized to clear (unlocked) since C++20.
  static std::atomic_flag lock;

  // Backoff state must be per-thread; a shared counter would be a data race.
  unsigned int spin_count = 1;

  // Test-and-test-and-set: test() (C++20) only reads the flag, so spinning
  // on it avoids the cache-line invalidations of a failed test_and_set.
  // acquire: no read/write in the critical section can be reordered before it.
  while (lock.test_and_set(std::memory_order_acquire)) {
    while (lock.test(std::memory_order_relaxed)) {
      // Exponential backoff; random backoff (e.g. uniform in [1, 1024])
      // can reduce cache misses further.
      std::this_thread::sleep_for(std::chrono::milliseconds(spin_count));
      spin_count &lt;&lt;= 1;
    }
  }

  std::cout &lt;&lt; "Thread " &lt;&lt; id &lt;&lt; " entered critical section\n";
  std::this_thread::sleep_for(std::chrono::milliseconds(1000));
  std::cout &lt;&lt; "Thread " &lt;&lt; id &lt;&lt; " exited critical section\n";

  // release: modifications made in the critical section become visible
  // to the next thread that acquires the lock.
  lock.clear(std::memory_order_release);
}

int main() {
  // Store the jthreads so they run concurrently; a temporary jthread
  // joins immediately in its destructor, which would serialize the loop.
  std::vector&lt;std::jthread&gt; threads;
  for (int i = 0; i &lt; 5; ++i) {
    threads.emplace_back(::critical_section, i);
  }
} // all jthreads join here
</code></pre>
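<p>For comparison, here is the kind of handoff std::condition_variable enables: the waiting thread sleeps instead of spinning, and the predicate form handles the case where the notification arrives before the wait starts (the names here are illustrative):</p>
<pre><code class="lang-cpp">#include &lt;condition_variable&gt;
#include &lt;iostream&gt;
#include &lt;mutex&gt;
#include &lt;thread&gt;

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

int main() {
  std::jthread waiter([] {
    std::unique_lock lock(mtx);
    // Sleeps (no spinning) until notified AND the predicate holds.
    cv.wait(lock, [] { return ready; });
    std::cout &lt;&lt; "Condition met, proceeding\n";
  });

  {
    std::lock_guard lock(mtx);
    ready = true;
  }
  cv.notify_one(); // wake the waiting thread

  return 0;
} // waiter joins automatically
</code></pre>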
<h2 id="heading-scoped-lock">Scoped lock:</h2>
<p><strong>Deadlock avoidance.</strong> <em>Can lock multiple mutexes at once</em> and <strong>unlocks them at the end of the scope.</strong> It uses the same deadlock-avoidance algorithm as std::lock to acquire all the mutexes atomically. <strong>It's a lightweight alternative, basically a lock_guard that can lock multiple mutexes at once.</strong></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::mutex g_mutex1;
<span class="hljs-built_in">std</span>::mutex g_mutex2;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">incrementCounter</span><span class="hljs-params">(<span class="hljs-keyword">int</span> i)</span> </span>{
  // will lock both mutexes at once, look at the syntax: it's a wrapper!
  <span class="hljs-function"><span class="hljs-built_in">std</span>::scoped_lock <span class="hljs-title">lock</span><span class="hljs-params">(g_mutex1, g_mutex2)</span></span>;
  <span class="hljs-comment">// the first will enter, the second will wait</span>
  <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::seconds(<span class="hljs-number">3</span>));
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Thread ID "</span> &lt;&lt; i &lt;&lt; <span class="hljs-string">" is running\n"</span>;
} <span class="hljs-comment">// scoped lock will unlock the mutexes</span>

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t1</span><span class="hljs-params">(incrementCounter, <span class="hljs-number">0</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t2</span><span class="hljs-params">(incrementCounter, <span class="hljs-number">1</span>)</span></span>;

  t1.join();
  t2.join();

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<h2 id="heading-shared-lock">Shared lock</h2>
<p>A lock for sharing. Multiple readers can hold it at once, but only one writer at a time. <strong>We either exclusively lock for a single writer thread, or, if there are no writers, share the lock across multiple readers.</strong></p>
<p>For example, while six readers hold the shared lock, any writers wait.</p>
<p><em>While a writer is writing, no readers can share the lock, because the writer holds it exclusively.</em></p>
<p>It's either writers or readers at a time, never both. This comes in another flavor, std::shared_timed_mutex, which adds timed acquisition: a thread can wait for a certain time before giving up on the lock and continuing. <em>For example, a reader could be willing to wait for 4 seconds; after that it gives up and does other work instead.</em></p>
<p><em>There's no fixed reader/writer limit. Conceptually, a shared mutex combines an exclusive lock for the writer with a shared lock for the readers.</em></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;shared_mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::shared_mutex rw_mutex;
<span class="hljs-keyword">int</span> shared_data = <span class="hljs-number">0</span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">reader</span><span class="hljs-params">(<span class="hljs-keyword">int</span> id)</span> </span>{
  <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
    <span class="hljs-comment">// Lock shared access (multiple readers allowed)</span>
    <span class="hljs-function"><span class="hljs-built_in">std</span>::shared_lock&lt;<span class="hljs-built_in">std</span>::shared_mutex&gt; <span class="hljs-title">lock</span><span class="hljs-params">(rw_mutex)</span></span>;
    std::cout &lt;&lt; "Reader " &lt;&lt; id &lt;&lt; " read shared data: " &lt;&lt; shared_data &lt;&lt; '\n';

    <span class="hljs-comment">// Simulate reading</span>
    <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::milliseconds(<span class="hljs-number">100</span>));
  }
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">writer</span><span class="hljs-params">(<span class="hljs-keyword">int</span> id)</span> </span>{
  <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
    <span class="hljs-comment">// Lock exclusive access (only one writer allowed). Writer cannot enter if there's readers and vice versa.</span>
    <span class="hljs-function"><span class="hljs-built_in">std</span>::unique_lock&lt;<span class="hljs-built_in">std</span>::shared_mutex&gt; <span class="hljs-title">lock</span><span class="hljs-params">(rw_mutex)</span></span>;
    <span class="hljs-comment">// Increment shared data</span>
    shared_data++;
    std::cout &lt;&lt; "Writer " &lt;&lt; id &lt;&lt; " incremented shared data to: " &lt;&lt; shared_data &lt;&lt; '\n';

    <span class="hljs-comment">// Simulate writing</span>
    <span class="hljs-built_in">std</span>::this_thread::sleep_for(<span class="hljs-built_in">std</span>::chrono::milliseconds(<span class="hljs-number">200</span>));
  }
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-comment">// Create reader threads</span>
  <span class="hljs-built_in">std</span>::thread readers[<span class="hljs-number">6</span>];
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">6</span>; ++i) {
    readers[i] = <span class="hljs-built_in">std</span>::thread(reader, i);
  }

  <span class="hljs-comment">// Create writer thread</span>
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">writerThread</span><span class="hljs-params">(writer, <span class="hljs-number">1</span>)</span></span>;

  // Join threads (the reader/writer loops run forever, so this demo never exits)
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">6</span>; ++i) {
    readers[i].join();
  }
  writerThread.join();

  <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
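<p>The timed flavor mentioned above, <code>std::shared_timed_mutex</code>, can be sketched like this (the durations are illustrative):</p>
<pre><code class="lang-cpp">#include &lt;chrono&gt;
#include &lt;iostream&gt;
#include &lt;shared_mutex&gt;
#include &lt;thread&gt;

std::shared_timed_mutex rw;

int main() {
  // A writer holds the exclusive lock for half a second.
  std::jthread writer([] {
    std::unique_lock lock(rw);
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
  });

  // Give the writer a head start.
  std::this_thread::sleep_for(std::chrono::milliseconds(50));

  // A reader waits up to 100 ms for shared access, then gives up.
  const bool got = rw.try_lock_shared_for(std::chrono::milliseconds(100));
  if (got) {
    std::cout &lt;&lt; "Reader got the lock\n";
    rw.unlock_shared();
  } else {
    // With these durations the reader normally times out here.
    std::cout &lt;&lt; "Reader timed out, doing other work instead\n";
  }
  return 0;
} // writer joins automatically
</code></pre>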
<h2 id="heading-lock-guard">Lock guard</h2>
<p>Lock guard is a <strong>lightweight alternative to unique_lock</strong> in C++. It is particularly useful when you <strong>need to lock a scope</strong> and <strong>don't intend to manually assign or release the lock.</strong> Unlike unique_lock, which offers more flexibility and options, lock_guard provides a simpler interface with less overhead. It automatically locks the associated mutex upon construction and releases it upon destruction. Lock guard is commonly used in straightforward scenarios where <strong>manual lock management is unnecessary,</strong> helping to keep code concise and efficient, like the majority of the code samples here. Useful for simple multithreaded code. <strong>Note that you can't unlock the lock guard manually.</strong></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;thread&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::mutex mtx;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">printNumbers</span><span class="hljs-params">(<span class="hljs-keyword">int</span> id)</span> </span>{
  <span class="hljs-comment">// Lock ends at the end of the scope, no need to unlock or assign,</span>
  <span class="hljs-comment">// So we use this cheap wrapper instead of unique_lock</span>
  <span class="hljs-function"><span class="hljs-built_in">std</span>::lock_guard&lt;<span class="hljs-built_in">std</span>::mutex&gt; <span class="hljs-title">lock</span><span class="hljs-params">(mtx)</span></span>;
  <span class="hljs-comment">// Critical section</span>
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">1</span>; i &lt;= <span class="hljs-number">5</span>; ++i) {
    <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Thread "</span> &lt;&lt; id &lt;&lt; <span class="hljs-string">" prints: "</span> &lt;&lt; i &lt;&lt; <span class="hljs-string">'\n'</span>;
  } <span class="hljs-comment">// lock is released here automatically</span>
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t1</span><span class="hljs-params">(::printNumbers, <span class="hljs-number">1</span>)</span></span>;
  <span class="hljs-function"><span class="hljs-built_in">std</span>::thread <span class="hljs-title">t2</span><span class="hljs-params">(::printNumbers, <span class="hljs-number">2</span>)</span></span>;
  t1.join();
  t2.join();

  <span class="hljs-comment">// Output:</span>
  <span class="hljs-comment">// Thread 1 prints: 1</span>
  <span class="hljs-comment">// Thread 1 prints: 2</span>
  <span class="hljs-comment">// Thread 1 prints: 3</span>
  <span class="hljs-comment">// Thread 1 prints: 4</span>
  <span class="hljs-comment">// Thread 1 prints: 5</span>
  <span class="hljs-comment">// Thread 2 prints: 1</span>
  <span class="hljs-comment">// Thread 2 prints: 2</span>
  <span class="hljs-comment">// Thread 2 prints: 3</span>
  <span class="hljs-comment">// Thread 2 prints: 4</span>
  <span class="hljs-comment">// Thread 2 prints: 5</span>
}
</code></pre>
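<p>When you do need to release the lock mid-scope, or re-acquire it later, std::unique_lock is the flexible counterpart. A minimal sketch:</p>
<pre><code class="lang-cpp">#include &lt;iostream&gt;
#include &lt;mutex&gt;
#include &lt;thread&gt;

std::mutex mtx;

void worker() {
  std::unique_lock&lt;std::mutex&gt; lock(mtx);
  std::cout &lt;&lt; "Inside critical section\n";
  lock.unlock();  // release early, which lock_guard cannot do
  // ... non-critical work runs here without holding the mutex ...
  lock.lock();    // re-acquire later if needed
  std::cout &lt;&lt; "Back inside critical section\n";
}  // released automatically if still held

int main() {
  std::thread t(worker);
  t.join();
}
</code></pre>
<p>This flexibility is what the extra overhead of unique_lock buys you; if you never call unlock() yourself, prefer lock_guard.</p>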
<h2 id="heading-stdonceflag-and-stdcallonce">std::once_flag and std::call_once:</h2>
<p>These are useful functions to ensure that a specific function is executed only once, <em>regardless of how many times it is called from different threads or contexts.</em></p>
<p>std::once_flag serves as a synchronization flag to coordinate the execution of a function across multiple threads. <strong>It ensures that the function associated with it is called exactly once, even in the presence of concurrent access.</strong></p>
<p>std::call_once is the function used in conjunction with std::once_flag to achieve this behavior. It takes a reference to a std::once_flag and a callable object (usually a function or a lambda expression) as arguments. The first time std::call_once is called with a particular std::once_flag, <em>it executes the associated function, and subsequent calls to std::call_once with the same std::once_flag are ignored.</em></p>
<p><strong>There's no way to reset a once flag. One use only.</strong></p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;iostream&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;mutex&gt;</span></span>

<span class="hljs-built_in">std</span>::once_flag flag;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">do_once</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-built_in">std</span>::<span class="hljs-built_in">cout</span> &lt;&lt; <span class="hljs-string">"Called only once!\n"</span>;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">3</span>; ++i) {
    <span class="hljs-comment">// Only the first call runs do_once; subsequent calls are ignored.</span>
    <span class="hljs-built_in">std</span>::call_once(::flag, ::do_once);
  }
  <span class="hljs-comment">// Output: Called only once!</span>
}
</code></pre>
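<p>A common real-world use of std::call_once is thread-safe lazy initialization of a shared resource. A rough sketch (the ensure_initialized name and the plain int stand-in are illustrative):</p>
<pre><code class="lang-cpp">#include &lt;iostream&gt;
#include &lt;mutex&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

std::once_flag init_flag;
int shared_config = 0;  // stands in for an expensive-to-build resource

void ensure_initialized() {
  std::call_once(init_flag, [] {
    shared_config = 42;            // runs exactly once
    std::cout &lt;&lt; "Initialized\n";  // printed a single time
  });
}

int main() {
  std::vector&lt;std::thread&gt; threads;
  for (int i = 0; i &lt; 4; ++i) threads.emplace_back(ensure_initialized);
  for (auto&amp; t : threads) t.join();
  std::cout &lt;&lt; "Value: " &lt;&lt; shared_config &lt;&lt; '\n';  // Value: 42
}
</code></pre>
<p>Every thread can safely call ensure_initialized; whichever arrives first does the work, and the rest see the fully initialized value.</p>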
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX8pWg0NVs9ltSnE9YqcEz-qxLV2GKCGRasLPtEhm202_HwNMHMqRipFbTHrK8jkr0O6UQSg0qAoJzkUu4H_t3nCmz-jzlcdUY_-jIUbNrV8JDY4Ra7hRNYFQcn6j2dijchDP9LDuVRMOsSOBBqUhKpoXfPDYIVLOddlZLQaXvg7dFsN0ohOkEOn5Y_hI/s1600-rw/download.jpg" alt /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Today, we delved into some of the fundamentals of concurrency in C++: we covered <strong>threads, atomics, barriers, latches, locks and everything in between</strong>. These are fundamental building blocks for complex software and games, where we need to leverage threading in performance-critical applications. The quiz awaits!</p>
<h2 id="heading-quiz">Quiz</h2>
<p>A. What is the purpose of a std::recursive_mutex?</p>
<p>A) To synchronize access to shared resources across multiple threads.<br />B) To ensure only one thread can access a critical section at a time.<br />C) To allow a thread to lock the same mutex multiple times without causing a deadlock.<br />D) To create a one-time barrier for synchronizing multiple threads.</p>
<p>B. What is the main advantage of using std::jthread over std::thread?</p>
<p>A) std::jthread is more lightweight than std::thread.<br />B) std::jthread allows for better control over thread priority.<br />C) std::jthread provides better error handling for thread creation.<br />D) std::jthread automatically joins the thread when it goes out of scope.</p>
<p>C. What is the purpose of a std::stop_token?</p>
<p>A) To provide a mechanism for threads to stop execution gracefully.<br />B) To synchronize access to critical sections of code.<br />C) To prevent other threads from accessing shared resources.<br />D) To control the execution order of threads.</p>
<p>D. What is the difference between a std::latch and a std::barrier?</p>
<p>A) std::barrier can synchronize an arbitrary number of threads, while std::latch is limited to a fixed number.<br />B) std::latch is used for thread signaling, while std::barrier is used for mutual exclusion.<br />C) std::barrier allows threads to proceed once a certain number of threads have reached a point in the code, while std::latch blocks until all threads have reached a point.<br />D) std::barrier is reusable, while std::latch is one-time use only.</p>
<p>E. What is the purpose of std::atomic?</p>
<p>A) To provide a mechanism for thread creation.<br />B) To create atomic operations on fundamental data types.<br />C) To synchronize access to shared resources.<br />D) To implement locking mechanisms.</p>
<p>F. What is the purpose of a std::semaphore in C++ concurrency?</p>
<p>A) To provide a mechanism for threads to stop execution gracefully.<br />B) To create atomic operations on fundamental data types.<br />C) To synchronize access to critical sections of code.<br />D) To create a one-time barrier for synchronizing multiple threads.</p>
<h2 id="heading-answers">Answers</h2>
<p>A. <code>C</code></p>
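<p>std::recursive_mutex lets the thread that already owns the mutex lock it again, where a plain std::mutex would deadlock. A minimal sketch (the outer/inner function names are just illustrative):</p>
<pre><code class="lang-cpp">#include &lt;iostream&gt;
#include &lt;mutex&gt;

std::recursive_mutex rmtx;

void inner() {
  std::lock_guard&lt;std::recursive_mutex&gt; lock(rmtx);  // second lock, same thread: OK
  std::cout &lt;&lt; "inner\n";
}

void outer() {
  std::lock_guard&lt;std::recursive_mutex&gt; lock(rmtx);  // first lock
  inner();  // with a plain std::mutex this call would deadlock
}

int main() {
  outer();
}
</code></pre>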
<p>B. <code>D</code></p>
<p>C. <code>A</code></p>
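<p>A stop token is a cooperative cancellation mechanism: the thread polls it and decides when to exit cleanly. A rough C++20 sketch using std::jthread, which both passes the token to the callable and auto-joins on destruction:</p>
<pre><code class="lang-cpp">#include &lt;chrono&gt;
#include &lt;iostream&gt;
#include &lt;thread&gt;

int main() {
  std::jthread worker([](std::stop_token st) {
    while (!st.stop_requested()) {
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    std::cout &lt;&lt; "Stopping gracefully\n";
  });
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
  worker.request_stop();  // cooperative: the loop sees the token and exits
}  // the jthread also auto-joins here
</code></pre>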
<p>D. <code>D</code></p>
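<p>For the latch/barrier distinction: std::latch is single-use (its counter only reaches zero once), while std::barrier resets automatically and can synchronize repeated phases. A small C++20 sketch showing both:</p>
<pre><code class="lang-cpp">#include &lt;barrier&gt;
#include &lt;iostream&gt;
#include &lt;latch&gt;
#include &lt;thread&gt;

int main() {
  std::latch done(2);    // single-use: counts down to zero once
  std::barrier sync(2);  // reusable: resets after every phase

  auto worker = [&amp;] {
    sync.arrive_and_wait();  // phase 1: both threads meet here
    sync.arrive_and_wait();  // phase 2: the same barrier, reused
    done.count_down();
  };

  std::thread t1(worker), t2(worker);
  done.wait();  // blocks until the latch hits zero
  t1.join();
  t2.join();
  std::cout &lt;&lt; "All phases complete\n";
}
</code></pre>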
<p>E. <code>B</code></p>
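<p>The classic std::atomic demonstration: two threads incrementing a shared counter with no lock and no lost updates. A minimal sketch:</p>
<pre><code class="lang-cpp">#include &lt;atomic&gt;
#include &lt;iostream&gt;
#include &lt;thread&gt;

std::atomic&lt;int&gt; counter{0};

void add() {
  for (int i = 0; i &lt; 1000; ++i) ++counter;  // atomic increment, no lock needed
}

int main() {
  std::thread t1(add), t2(add);
  t1.join();
  t2.join();
  std::cout &lt;&lt; counter &lt;&lt; '\n';  // always 2000; a plain int could lose updates
}
</code></pre>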
<p>F. <code>C</code></p>
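<p>A counting semaphore holds a pool of permits: acquire takes one (blocking when none are left) and release returns it, which caps how many threads run a section concurrently. A rough C++20 sketch limiting the section to two threads at a time:</p>
<pre><code class="lang-cpp">#include &lt;iostream&gt;
#include &lt;mutex&gt;
#include &lt;semaphore&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

std::counting_semaphore&lt;2&gt; slots(2);  // at most 2 threads inside at once
std::mutex io_mtx;

void worker(int id) {
  slots.acquire();  // blocks while both permits are taken
  {
    std::lock_guard&lt;std::mutex&gt; lock(io_mtx);
    std::cout &lt;&lt; "Thread " &lt;&lt; id &lt;&lt; " working\n";
  }
  slots.release();  // hand the permit back
}

int main() {
  std::vector&lt;std::thread&gt; threads;
  for (int i = 1; i &lt;= 4; ++i) threads.emplace_back(worker, i);
  for (auto&amp; t : threads) t.join();
}
</code></pre>
<p>The output order is nondeterministic, but at no point are more than two workers past acquire() simultaneously.</p>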
<h2 id="heading-related-resources">Related Resources</h2>
<ul>
<li><p><a target="_blank" href="https://www.manning.com/books/c-plus-plus-concurrency-in-action-second-edition">C++ Concurrency in Action by Anthony Williams</a></p>
</li>
<li><p><a target="_blank" href="https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-concurrency">C++ Core Guidelines - Concurrency Section</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/playlist?list=PLk6CEY9XxSIAeK-EAh3hB4fgNvYkYmghp">CppNuts Multithreading Playlist</a></p>
</li>
</ul>
<p>Thank you for following along with this tutorial on thread synchronization in C++. I hope you found the information valuable and applicable to your projects. Happy coding!</p>
]]></content:encoded></item></channel></rss>