<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[SurajOnCloud]]></title><description><![CDATA[I learn and build cloud projects on AWS. Sharing the errors I make, how I fix them, and simple backend/serverless guides for devs who want real, practical cloud]]></description><link>https://blog.surajv.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1765121161464/a8ee455c-5588-4177-9182-3ca4fdcfc739.png</url><title>SurajOnCloud</title><link>https://blog.surajv.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 12:41:12 GMT</lastBuildDate><atom:link href="https://blog.surajv.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Understanding Batch Processing: A Beginner's Guide]]></title><description><![CDATA[Imagine being a celebrity with a massive following. Every time you post, you receive an overwhelming number of likes. Now, picture getting 100,000 likes within minutes. Would Instagram update its database 100,000 times in one minute to reflect this?
...]]></description><link>https://blog.surajv.dev/understanding-batch-processing-a-beginners-guide</link><guid isPermaLink="true">https://blog.surajv.dev/understanding-batch-processing-a-beginners-guide</guid><category><![CDATA[Batch Processing]]></category><category><![CDATA[backend developments]]></category><dc:creator><![CDATA[Suraj vishwakarma]]></dc:creator><pubDate>Wed, 17 Dec 2025 20:31:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765282866321/062aa5d8-c016-4618-be22-cc685cafb925.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine being a celebrity with a massive following. Every time you post, you receive an overwhelming number of likes. Now, picture getting 100,000 likes within minutes. Would Instagram update its database 100,000 times in one minute to reflect this?</p>
<h2 id="heading-the-reality-of-database-updates">The Reality of Database Updates</h2>
<p>When a celebrity posts a photo and receives 100,000 likes in minutes, a critical question arises:</p>
<blockquote>
<p>Would Instagram really hit its database 100,000 times in one minute just to update a like counter?</p>
</blockquote>
<p>The answer is simple: <strong>no — and it shouldn’t.</strong></p>
<p>This is where <strong>batch processing</strong> becomes essential.</p>
<p>In this guide, we’ll explore:</p>
<ul>
<li><p>The pitfalls of naïve database updates at scale</p>
</li>
<li><p>How large systems manage massive write traffic</p>
</li>
<li><p>A practical batch-processing architecture</p>
</li>
<li><p>Implementation using <strong>Redis and worker-based batching</strong></p>
</li>
</ul>
<p>This is not just theory; it’s how production systems operate.</p>
<h2 id="heading-the-naive-approach-and-its-limitations">The Naïve Approach and Its Limitations</h2>
<p>A straightforward method to handle likes involves:</p>
<ol>
<li><p>User clicks Like</p>
</li>
<li><p>Backend increments <code>likes_count</code> in the database</p>
</li>
<li><p>Immediate database write</p>
</li>
</ol>
<h3 id="heading-challenges-at-scale">Challenges at Scale</h3>
<p>If 100,000 users like a post within a minute, this results in:</p>
<ul>
<li><p>100,000 database write operations</p>
</li>
<li><p>Heavy lock contention on the same row</p>
</li>
<li><p>Increased latency for all users</p>
</li>
<li><p>Risk of database throttling or outages</p>
</li>
</ul>
<p>Relational databases are not designed for extremely high-frequency writes on the same record. If Instagram followed this approach, their database would struggle to cope.</p>
<h2 id="heading-how-large-systems-address-the-problem">How Large Systems Address the Problem</h2>
<p>Big systems adhere to a key principle:</p>
<blockquote>
<p><strong>User experience must be fast; database writes can be delayed.</strong></p>
</blockquote>
<p>A like doesn’t need to be immediately stored in permanent storage. A short delay is acceptable and invisible to users.</p>
<p>The strategy involves:</p>
<ul>
<li><p>Quickly accepting likes</p>
</li>
<li><p>Storing them in a fast in-memory system</p>
</li>
<li><p>Persisting them to the database in batches</p>
</li>
</ul>
<h2 id="heading-understanding-batch-processing">Understanding Batch Processing</h2>
<p><strong>Batch processing</strong> involves:</p>
<ul>
<li><p>Collecting multiple events over time</p>
</li>
<li><p>Processing them together as a group</p>
</li>
<li><p>Dramatically reducing system load</p>
</li>
</ul>
<p>Instead of:</p>
<blockquote>
<p>100,000 likes → 100,000 database writes</p>
</blockquote>
<p>We achieve:</p>
<blockquote>
<p>100,000 likes → 1 Redis counter → 1 batched database write</p>
</blockquote>
<p>For that post over that interval, this is a <strong>100,000x reduction</strong> in database writes.</p>
<h2 id="heading-architectural-overview">Architectural Overview</h2>
<p>Here’s the high-level architecture:</p>
<ol>
<li><p>User clicks Like</p>
</li>
<li><p>Backend updates Redis (fast, in-memory)</p>
</li>
<li><p>A background worker runs periodically</p>
</li>
<li><p>Worker reads accumulated likes from Redis</p>
</li>
<li><p>Worker updates the database in batches</p>
</li>
</ol>
<h2 id="heading-why-choose-redis">Why Choose Redis?</h2>
<p>Redis is ideal for this scenario because:</p>
<ul>
<li><p>It’s in-memory, making it extremely fast</p>
</li>
<li><p>Supports atomic operations (<code>INCR</code>)</p>
</li>
<li><p>Can handle millions of operations per second</p>
</li>
<li><p>Temporary data storage is acceptable</p>
</li>
</ul>
<p>A database ensures <strong>durability</strong>, while Redis provides <strong>speed</strong>.</p>
<h2 id="heading-storing-likes-in-redis">Storing Likes in Redis</h2>
<p>When a user likes a post:</p>
<pre><code class="lang-typescript">redis.incr(<span class="hljs-string">`post:likes:<span class="hljs-subst">${postId}</span>`</span>)
</code></pre>
<p>Benefits include:</p>
<ul>
<li><p>O(1) operation</p>
</li>
<li><p>No database lock</p>
</li>
<li><p>Immediate user response</p>
</li>
</ul>
<p>At this stage:</p>
<ul>
<li><p>The UI can display the updated count</p>
</li>
<li><p>The database remains untouched</p>
</li>
</ul>
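<p>To make this concrete, here is a minimal sketch of the "accept the like fast" step. The <code>redis</code> object below is an in-memory stand-in for a Redis client (so the snippet is self-contained), and the <code>likePost</code> handler name is an illustrative assumption, not Instagram's actual code:</p>

```typescript
// In-memory stand-in for a Redis client that exposes INCR.
type Counter = { incr(key: string): Promise<number> };

const memory = new Map<string, number>();
const redis: Counter = {
  async incr(key: string) {
    const next = (memory.get(key) ?? 0) + 1;
    memory.set(key, next);
    return next;
  },
};

// Hot path of a hypothetical "like" handler: one O(1) increment,
// no database write. The returned count can go straight to the UI.
async function likePost(postId: string): Promise<number> {
  return redis.incr(`post:likes:${postId}`);
}
```

<p>With a real client such as ioredis, the handler body stays the same; only the <code>redis</code> object changes.</p>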
<h2 id="heading-the-role-of-the-batch-worker">The Role of the Batch Worker</h2>
<p>A background worker runs every few seconds or minutes.</p>
<h3 id="heading-worker-responsibilities">Worker Responsibilities</h3>
<ol>
<li><p>Fetch all like counters from Redis</p>
</li>
<li><p>Aggregate them</p>
</li>
<li><p>Write updates to the database</p>
</li>
<li><p>Reset Redis counters</p>
</li>
</ol>
<p>Pseudo-flow:</p>
<pre><code>for each postId in redisKeys:
  likes = redis.get(postId)
  UPDATE posts SET likes_count = likes_count + likes
  redis.del(postId)
</code></pre>
<p>This reduces thousands of updates to <strong>one update per post per interval</strong>.</p>
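<p>The pseudo-flow above can be sketched in TypeScript. Redis and the database are replaced with in-memory stand-ins so the sketch runs on its own; the key prefix and function names are illustrative assumptions:</p>

```typescript
// Stand-in for Redis: key "post:likes:<id>" -> pending like count.
const counters = new Map<string, number>();

// Stand-in for the database: postId -> persisted likes_count.
const db = {
  likesCount: new Map<string, number>(),
  // Plays the role of: UPDATE posts SET likes_count = likes_count + delta
  addLikes(postId: string, delta: number) {
    this.likesCount.set(postId, (this.likesCount.get(postId) ?? 0) + delta);
  },
};

// One batch run: read every counter, apply it in a single write per
// post, then reset the counter.
function runBatch(): void {
  for (const [key, likes] of [...counters]) {
    const postId = key.replace("post:likes:", "");
    db.addLikes(postId, likes);
    counters.delete(key); // reset only after the write
  }
}
```

<p>In production this would run on a timer (for example <code>setInterval(runBatch, 5000)</code>) and iterate Redis keys with <code>SCAN</code> instead of holding them in one process.</p>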
<h2 id="heading-determining-batch-size-and-frequency">Determining Batch Size and Frequency</h2>
<p>This is a design decision:</p>
<ul>
<li><p>Every 5 seconds → more real-time, more database writes</p>
</li>
<li><p>Every 1 minute → fewer database writes, slight delay</p>
</li>
</ul>
<p>Production systems adjust based on:</p>
<ul>
<li><p>Traffic</p>
</li>
<li><p>Database capacity</p>
</li>
<li><p>Acceptable data freshness</p>
</li>
</ul>
<p>Instagram doesn’t require millisecond-accurate likes, nor do most apps.</p>
<h2 id="heading-handling-edge-cases">Handling Edge Cases</h2>
<h3 id="heading-redis-crash">Redis Crash</h3>
<p>If Redis crashes, likes in memory may be lost.</p>
<p>Mitigations:</p>
<ul>
<li><p>Enable Redis persistence (AOF/RDB)</p>
</li>
<li><p>Accept minor data loss for non-critical metrics</p>
</li>
</ul>
<p>Likes are <strong>eventually consistent</strong>, not financial transactions.</p>
<h3 id="heading-worker-failure">Worker Failure</h3>
<p>If a worker crashes mid-batch:</p>
<ul>
<li><p>Redis data remains intact</p>
</li>
<li><p>The next worker run continues processing</p>
</li>
</ul>
<p>This ensures the system is <strong>fault-tolerant</strong>.</p>
<h3 id="heading-duplicate-updates">Duplicate Updates</h3>
<p>Workers must be:</p>
<ul>
<li><p>Idempotent</p>
</li>
<li><p>Or carefully delete Redis keys only after successful database writes</p>
</li>
</ul>
<p>This prevents double-counting likes.</p>
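<p>One way to avoid the race between reading and deleting a counter is an atomic "read and reset" (Redis 6.2+ offers <code>GETDEL</code> for exactly this). The sketch below simulates that behavior in memory; restoring the count on a failed write is one possible design, not the only correct one:</p>

```typescript
const store = new Map<string, number>();

// Atomic read-and-reset: likes arriving after this call land in a fresh
// counter, so nothing slips through between "read" and "delete".
function getDel(key: string): number {
  const value = store.get(key) ?? 0;
  store.delete(key);
  return value;
}

function drain(key: string, writeToDb: (n: number) => void): void {
  const pending = getDel(key);
  if (pending === 0) return;
  try {
    writeToDb(pending); // persist the batch
  } catch {
    // DB write failed: add the count back so the next run retries it.
    store.set(key, (store.get(key) ?? 0) + pending);
  }
}
```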
<h2 id="heading-why-this-pattern-is-industry-standard">Why This Pattern is Industry Standard</h2>
<p>This approach is used for:</p>
<ul>
<li><p>Like counters</p>
</li>
<li><p>View counts</p>
</li>
<li><p>Follower counts</p>
</li>
<li><p>Analytics events</p>
</li>
<li><p>Notifications</p>
</li>
</ul>
<p>Any system with <strong>high write frequency</strong> employs batching. Similar patterns are found in:</p>
<ul>
<li><p>Instagram</p>
</li>
<li><p>Twitter</p>
</li>
<li><p>YouTube</p>
</li>
<li><p>Netflix analytics</p>
</li>
</ul>
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ul>
<li><p>Databases shouldn’t handle extremely high-frequency writes</p>
</li>
<li><p>Redis absorbs traffic spikes</p>
</li>
<li><p>Batch workers ensure system stability</p>
</li>
<li><p>Eventual consistency is acceptable for metrics</p>
</li>
</ul>
<p>For any application that might go viral, <strong>batch processing is essential</strong>.</p>
<h2 id="heading-final-thought">Final Thought</h2>
<p>Next time you see a post jump from 10K to 100K likes instantly, remember:</p>
<p>Behind the scenes, no database is being overwhelmed. A smart batching system is efficiently managing the load.</p>
<p>For students preparing for backend interviews or system design rounds, this pattern is invaluable. Understand it, implement it, and discuss it with confidence.</p>
<h3 id="heading-implementation">Implementation</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/SeOu8Lc0Z_M">https://youtu.be/SeOu8Lc0Z_M</a></div>
]]></content:encoded></item><item><title><![CDATA[What are packets?]]></title><description><![CDATA[In computer networks, this is one of the most asked and fundamental topics. Let me break it down with a simple example.
Suppose you are sending a video on WhatsApp to your friend. The network you are using is the same network that many other people a...]]></description><link>https://blog.surajv.dev/what-are-packets</link><guid isPermaLink="true">https://blog.surajv.dev/what-are-packets</guid><category><![CDATA[computer networks]]></category><category><![CDATA[packet tracer]]></category><category><![CDATA[packet-switching]]></category><category><![CDATA[Cn]]></category><dc:creator><![CDATA[Suraj vishwakarma]]></dc:creator><pubDate>Thu, 11 Dec 2025 17:29:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765474057017/79229a86-681b-4377-b301-b9b05bc17313.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In computer networks, this is one of the most asked and fundamental topics. Let me break it down with a simple example.</p>
<p>Suppose you are sending a video on WhatsApp to your friend. The network you are using is the same network that many other people are using at the same time. There is no fixed path that guarantees your video will travel directly and reliably to the right person. If a disturbance happens in the middle while sending one big chunk of data, the entire transfer can fail, and you would have to send the whole video again.</p>
<p>It’s similar to talking on an old wired telephone with a private, end-to-end wire connection. If that wire gets disturbed, the whole communication breaks.<br />But in reality, networks are shared—many users, many routes, lots of traffic.</p>
<p>This is where <strong>packets</strong> come in.</p>
<p>Instead of sending the entire video as one big block, the network <strong>breaks the video into many small chunks</strong>, called <strong>packets</strong>.<br />Each packet contains:</p>
<ol>
<li><p>A small part of the video</p>
</li>
<li><p>Metadata that describes how and where it should be delivered</p>
</li>
</ol>
<p>Think of it like sending a parcel to your friend. You attach important details like the <strong>address</strong>, <strong>sender</strong>, <strong>receiver</strong>, and sometimes <strong>weight</strong>. Without metadata, the parcel wouldn’t know where to go. The same idea applies to packets.</p>
<h2 id="heading-metadata-inside-a-packet">Metadata inside a packet</h2>
<p>Every packet includes three major components:</p>
<p><strong>1. Header</strong></p>
<p>This contains control information such as:</p>
<ul>
<li><p>Source address</p>
</li>
<li><p>Destination address</p>
</li>
<li><p>Sequence number</p>
</li>
<li><p>Protocol</p>
</li>
<li><p>TTL</p>
</li>
<li><p>Checksum</p>
</li>
</ul>
<p>The header tells the network <strong>where the packet came from</strong>, <strong>where it needs to go</strong>, and <strong>how it should be handled</strong>.</p>
<p><strong>2. Payload</strong></p>
<p>This is the <strong>actual data</strong>, the small part of your video being transmitted.</p>
<p><strong>3. Trailer</strong></p>
<p>This usually contains error-checking information, such as CRC, to verify whether the packet was delivered correctly.</p>
<p>By breaking the video into packets, the network can send each small chunk independently through different routes. If one packet gets lost, only that packet needs to be resent—<strong>not the entire video</strong>. This makes communication fast, reliable, and scalable, even when millions of people share the same network.</p>
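<p>Here is a toy sketch of that idea, not any real protocol: each packet carries a header with a sequence number, a payload chunk, and a trailer whose simple checksum stands in for a CRC. The receiver verifies each trailer and reorders packets by sequence number:</p>

```typescript
type Packet = {
  header: { seq: number; total: number };
  payload: string;
  trailer: { checksum: number };
};

// Toy checksum standing in for a real CRC.
const checksum = (s: string) =>
  [...s].reduce((sum, ch) => (sum + ch.charCodeAt(0)) % 65536, 0);

// Sender: break the message into fixed-size chunks and wrap each one.
function packetize(message: string, chunkSize: number): Packet[] {
  const chunks: string[] = [];
  for (let i = 0; i < message.length; i += chunkSize) {
    chunks.push(message.slice(i, i + chunkSize));
  }
  return chunks.map((payload, seq) => ({
    header: { seq, total: chunks.length },
    payload,
    trailer: { checksum: checksum(payload) },
  }));
}

// Receiver: check every trailer, then sort by sequence number,
// since packets can arrive out of order.
function reassemble(packets: Packet[]): string {
  for (const p of packets) {
    if (checksum(p.payload) !== p.trailer.checksum) {
      throw new Error(`packet ${p.header.seq} is corrupted`);
    }
  }
  return [...packets]
    .sort((a, b) => a.header.seq - b.header.seq)
    .map((p) => p.payload)
    .join("");
}
```

<p>Even if the packets arrive in reverse order, the sequence numbers let the receiver rebuild the original message.</p>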
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765473355204/9b9af716-46ef-4e73-8a08-cd59e0762b93.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-ipv4-packet-headers">IPV4 Packet Headers</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765473679430/5c902e7d-48a2-41ee-9fbd-484eed1f713f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-interview-question">Interview Questions</h2>
<p>1. What is a packet?</p>
<p>Explain the definition and why networks use packets instead of sending data as one large block.</p>
<p>2. What are the components of a packet?</p>
<p>Expected parts: Header, Payload, Trailer.</p>
<p>3. What information is stored inside a packet header?</p>
<p>Source IP, Destination IP, TTL, Protocol, Checksum, Sequence Number, etc.</p>
<p>4. What is the payload in a packet?</p>
<p>Explain that it's the actual user data (video chunk, message, file part).</p>
<p>5. What is the role of the trailer in a packet?</p>
<p>Error detection, usually CRC or checksums.</p>
<p>6. Do packets always take the same path to reach the destination? Why or why not?</p>
<p>Explain dynamic routing + path independence.</p>
<p>7. What happens when a packet gets corrupted or lost during transmission?</p>
<p>TCP → retransmits<br />UDP → packet lost permanently</p>
<p>8. What is TTL (Time-To-Live) in a packet and why is it important?</p>
<p>Prevents infinite loops.</p>
<p>9. What is packet fragmentation? Why does it occur?</p>
<p>Triggered when packet size &gt; MTU; packet is split into smaller fragments.</p>
<p>10. Can packets arrive out of order? How is this handled?</p>
<p>Yes. TCP reorders using sequence numbers.</p>
]]></content:encoded></item><item><title><![CDATA[Vertical Scaling vs Horizontal Scaling: What You Need to Know]]></title><description><![CDATA[Well, you have come a long way in your DevOps journey. When your application gets more requests than usual, the server running your application can't handle the load. Traditionally, there are two ways to scale up your server, as you just saw in the b...]]></description><link>https://blog.surajv.dev/vertical-scaling-vs-horizontal-scaling-what-you-need-to-know</link><guid isPermaLink="true">https://blog.surajv.dev/vertical-scaling-vs-horizontal-scaling-what-you-need-to-know</guid><category><![CDATA[vertical scaling]]></category><category><![CDATA[horizontal scaling]]></category><category><![CDATA[autoscaling group]]></category><category><![CDATA[EC2 instance]]></category><category><![CDATA[AWS]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Suraj vishwakarma]]></dc:creator><pubDate>Mon, 08 Dec 2025 18:17:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765188288168/ec83eae1-082a-4d93-aaa0-62887fed4845.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Well, you have come a long way in your DevOps journey. When your application gets more requests than usual, the server running your application can't handle the load. Traditionally, there are two ways to scale up your server, as you just saw in the banner.</p>
<ul>
<li><p>Vertical scaling - scale up</p>
</li>
<li><p>Horizontal scaling - scale out</p>
</li>
</ul>
<h3 id="heading-vertical-scaling-scale-up">Vertical scaling - scale up</h3>
<p>Suppose your server is getting more requests than usual and becoming overloaded; monitoring shows CPU usage climbing to 90%. Think from first principles: if the CPU and storage of my current server are insufficient, one thing we can do is get a bigger server with a more powerful CPU and more storage, right?</p>
<p>Exactly, this is known as vertical scaling: we swap the server for a bigger one that can handle the compute our application needs. One advantage of this method is that you can migrate to a bigger server without having to change any code.</p>
<p>Migrating from a small server to a bigger one is not a big deal; you just deploy your application on the bigger compute machine, which could be any cloud provider's compute, whether <strong>EC2 or a droplet</strong>. But from a cost perspective, a bigger server means more cost. Also keep in mind that as your application grows, you will have to migrate to ever bigger machines, and if at some point that single server cannot handle the load and fails, your whole application crashes. At this point you might ask: why am I using one big server at all? Why can't I run the same application on 5 small servers instead? This is where horizontal scaling comes in.</p>
<h3 id="heading-horizontal-scalling-scale-out">Horizontal scaling - scale out</h3>
<p>Well, instead of running the application on one bigger machine, we run the same application on a bunch of small machines with the same configuration. In this setup, every machine has a different <strong>IP address</strong>, and your domain can only point to one machine at a time. This is where a new term comes in, called <strong>reverse proxies</strong>.</p>
<p>I will try to break down reverse proxies in simple terms. A reverse proxy is basically the gateway to your application. Think of it as your gate: if you want to reach the application, every request has to go through it. (Without it, we can still reach the application, but the domain can only point to one specific machine's IP at a time; we can't reach every machine.) The proxy fixes this: it takes all the requests coming from users and transfers, or proxies, each request to a machine, either the one with the lowest load or by following a rule borrowed from OS scheduling called <strong>round robin</strong>. Round robin simply means the next request from a user is forwarded to machine 1, the one after that to machine 2, then machine 3, one by one to each machine. This way, the load is distributed.</p>
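<p>The round-robin rule is tiny when written out. This is a sketch of the idea only (real proxies like nginx implement it for you), and the server IPs are made-up examples:</p>

```typescript
const servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"];
let next = 0;

// Each incoming request asks for the next machine in the rotation.
function pickServer(): string {
  const server = servers[next];
  next = (next + 1) % servers.length; // wrap back around to machine 1
  return server;
}
```

<p>Request 1 goes to machine 1, request 2 to machine 2, request 3 to machine 3, and request 4 wraps back to machine 1.</p>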
<h3 id="heading-artitecture-of-reverse-proxies">Architecture of Reverse Proxies</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765217566243/573568a3-7533-40cf-b725-29d5665d4e1e.png" alt class="image--center mx-auto" /></p>
<p>Again, this approach has its own challenges and benefits: we have to optimize the setup and manage the extra complexity it brings.</p>
<p>Below is a short comparison table:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Feature</th><th>Vertical Scaling</th><th>Horizontal Scaling</th></tr>
</thead>
<tbody>
<tr>
<td>Method</td><td>Increase hardware power</td><td>Add more machines</td></tr>
<tr>
<td>Cost</td><td>Expensive at extremes</td><td>Cheaper to scale</td></tr>
<tr>
<td>Failure</td><td>One machine failure = full outage</td><td>Redundancy, no total downtime</td></tr>
<tr>
<td>Complexity</td><td>Easy</td><td>More complex</td></tr>
<tr>
<td>Best For</td><td>Databases, old monolith apps</td><td>Web servers, microservices, cloud apps</td></tr>
</tbody>
</table>
</div><p>Please do like, comment, and share.</p>
<p>Thanks for reading :)</p>
]]></content:encoded></item><item><title><![CDATA[Complete DevOps Learning Journey Blueprint]]></title><description><![CDATA[Deciding Factor , Why you choose devops ?

💡
Feel free to skip this section it , it includes what are the factors and why you have choose to learn devops . if you are aware and curios to learn feel free to move to next heading


As a student conside...]]></description><link>https://blog.surajv.dev/complete-devops-learning-journey-blueprint</link><guid isPermaLink="true">https://blog.surajv.dev/complete-devops-learning-journey-blueprint</guid><category><![CDATA[Devops]]></category><category><![CDATA[Roadmap]]></category><category><![CDATA[Developer]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><dc:creator><![CDATA[Suraj vishwakarma]]></dc:creator><pubDate>Sun, 07 Dec 2025 14:50:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765119027344/ff7ad1b3-9fe2-49fe-9848-99a404668237.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-deciding-factor-why-you-choose-devops">Deciding Factor: Why Choose DevOps?</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Feel free to skip this section; it covers the factors behind choosing to learn DevOps. If you are already aware and curious to keep learning, feel free to move on to the next heading.</div>
</div>

<p>As a student considering a journey into DevOps, I can relate to your curiosity, as I have been on this path for some time and can offer guidance based on my experience. There are various reasons you might choose to learn DevOps, whether through personal exploration or recommendations from peers.</p>
<p><strong>Common Mistake:</strong> A frequent error is choosing DevOps simply to avoid coding, as some of my friends have done, thinking it's the easier path.</p>
<blockquote>
<p>Let me dispel this myth: DevOps requires coding expertise, with shell scripting and Python being heavily used for automation.</p>
</blockquote>
<p>If you're genuinely curious about how applications are deployed to production, you're in the right place.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In production we mainly interact with the Linux operating system, usually Ubuntu, so it is good to get comfortable with this ecosystem. I recommend you use the terminal more from now on. Even if you are on Windows, install WSL to get Ubuntu and use bash. You can customise your shell to your convenience; personally I have been using zsh with oh-my-zsh, which gives a nice and easy user experience.</div>
</div>

<h1 id="heading-taking-first-step">Taking the First Step</h1>
<p><strong>Learning DevOps Tools in the Right Order</strong></p>
<p>There are hundreds of tools in the DevOps ecosystem. The challenge is not learning everything, but <strong>learning in the right order</strong> so your foundation stays strong.<br />Here’s a practical learning path that actually aligns with real engineering workflows:</p>
<h3 id="heading-1-git-github"><strong>1) Git + GitHub</strong></h3>
<p>Start version-controlling every project you build.</p>
<blockquote>
<p><strong>Rule:</strong> Push all your code to GitHub. Keep every project well-maintained.</p>
</blockquote>
<p>Why it matters:</p>
<ul>
<li><p>Collaborating with code becomes easier</p>
</li>
<li><p>You learn branching, merging, and pull requests</p>
</li>
<li><p>It builds your portfolio naturally</p>
</li>
</ul>
<p><strong>Key Skills to Learn</strong></p>
<ul>
<li><p>git init, git clone, git status</p>
</li>
<li><p>git add, git commit, git push</p>
</li>
<li><p>Branching &amp; Pull Requests</p>
</li>
</ul>
<h3 id="heading-2-linux-basics-shell"><strong>2) Linux Basics + Shell</strong></h3>
<p>Linux is the backbone of servers and cloud machines.</p>
<p>When you work inside the cloud, you interact through a terminal, so Linux knowledge is non-negotiable.</p>
<blockquote>
<p><strong>Recommendation:</strong> Learn a terminal editor like <code>vim</code> or <code>nano</code>.<br />When you SSH into a server, you won’t have VS Code waiting for you.</p>
</blockquote>
<p><strong>Key Concepts</strong></p>
<ul>
<li><p>File system navigation (cd, ls, rm, mv, cp)</p>
</li>
<li><p>Permissions &amp; users</p>
</li>
<li><p>System services &amp; processes</p>
</li>
<li><p>Networking basics (ping, curl, netstat)</p>
</li>
</ul>
<h3 id="heading-3-choose-a-cloud-provider"><strong>3) Choose a Cloud Provider</strong></h3>
<p>After Linux, pick one cloud to start with.</p>
<blockquote>
<p>For beginners, <strong>AWS is highly recommended</strong> because it’s widely used in the industry and has tons of documentation.</p>
</blockquote>
<p>Use your Linux skills and start doing practical exercises, like:</p>
<ul>
<li><p>Creating EC2 instances</p>
</li>
<li><p>SSH into instances using keys</p>
</li>
<li><p>Hosting simple apps</p>
</li>
</ul>
<p>The goal is to connect Linux knowledge with real cloud servers.</p>
<h3 id="heading-4-docker-containerization"><strong>4) Docker (Containerization)</strong></h3>
<p>Now that you can deploy on the cloud, learn how to deploy efficiently with containers.</p>
<p>Why Docker?</p>
<ul>
<li><p>Reproducible environments</p>
</li>
<li><p>Works the same on local + cloud</p>
</li>
<li><p>Makes CI/CD sooo much easier</p>
</li>
</ul>
<p><strong>Learn</strong></p>
<ul>
<li><p>Dockerfile</p>
</li>
<li><p>Images, containers, volumes, networks</p>
</li>
<li><p>Tagging &amp; pushing images to Docker Hub</p>
</li>
</ul>
<h3 id="heading-5-cicd-github-actions"><strong>5) CI/CD (GitHub Actions)</strong></h3>
<p>Once you know Docker, automate builds and deployments.</p>
<blockquote>
<p>CI/CD is not theory. You must build pipelines that test and deploy your apps automatically.</p>
</blockquote>
<p>Start with <strong>GitHub Actions</strong>, then explore others later.</p>
<p><strong>Important Concepts</strong></p>
<ul>
<li><p>Build pipelines</p>
</li>
<li><p>Deploy pipelines</p>
</li>
<li><p>Secrets management</p>
</li>
</ul>
<h3 id="heading-6-terraform-infrastructure-as-code-this-for-now-you-can-skip-and-can-jumps-back-later-after-learning"><strong>6) Terraform (Infrastructure as Code) (you can skip this for now and jump back to it later)</strong></h3>
<p>Instead of manually creating servers on AWS, automate it.</p>
<p><strong>Terraform teaches you:</strong></p>
<ul>
<li><p>Declarative infra provisioning</p>
</li>
<li><p>Reusability with modules</p>
</li>
<li><p>Working with multi-cloud setups</p>
</li>
</ul>
<p>You should be able to:</p>
<ul>
<li><p>Create VPC, EC2, IAM</p>
</li>
<li><p>Use Terraform variables &amp; outputs</p>
</li>
<li><p>Store state securely (S3 + DynamoDB)</p>
</li>
</ul>
<h3 id="heading-7-kubernetes-orchestration"><strong>7) Kubernetes (Orchestration)</strong></h3>
<p>Once you understand Docker, learn how to scale containers.</p>
<blockquote>
<p>Kubernetes = Running containers at scale.</p>
</blockquote>
<p>Focus on:</p>
<ul>
<li><p>Pods, Deployments, ReplicaSets</p>
</li>
<li><p>Services &amp; Ingress</p>
</li>
<li><p>ConfigMaps &amp; Secrets</p>
</li>
<li><p>Helm (bonus)</p>
</li>
</ul>
<p>Make sure you run apps locally using kind and kubectl, or on managed Kubernetes in the cloud (EKS/GKE).</p>
<h3 id="heading-8-observability-prometheus-grafana"><strong>8) Observability (Prometheus + Grafana)</strong></h3>
<p>If you deploy apps, you must monitor them.</p>
<p><strong>Observability includes:</strong></p>
<ul>
<li><p>Metrics</p>
</li>
<li><p>Logs</p>
</li>
<li><p>Tracing</p>
</li>
</ul>
<p>Prometheus helps collect metrics<br />Grafana visualizes them</p>
<p>Learn how to:</p>
<ul>
<li><p>Monitor applications</p>
</li>
<li><p>Create dashboards</p>
</li>
<li><p>Set up alerts</p>
</li>
</ul>
<h3 id="heading-9-devsecops-basics-trivy-vault"><strong>9) DevSecOps Basics (Trivy + Vault)</strong></h3>
<p>Security should never be an afterthought.</p>
<p><strong>Start with simple tools:</strong></p>
<ul>
<li><p><strong>Trivy</strong> → Scan vulnerabilities</p>
</li>
<li><p><strong>Vault</strong> → Manage secrets</p>
</li>
</ul>
<p>Use Trivy to scan:</p>
<ul>
<li><p>Docker images</p>
</li>
<li><p>Kubernetes workloads</p>
</li>
<li><p>Repositories</p>
</li>
</ul>
<p>Use Vault to store secrets securely.</p>
<h2 id="heading-tldr">TLDR</h2>
<p>The article provides a comprehensive guide for learning DevOps, starting with understanding the reasons for choosing this path. It emphasizes the importance of coding skills, particularly in shell scripting and Python. The learning journey includes mastering Git and GitHub for version control, Linux basics, choosing a cloud provider like AWS, learning Docker for containerization, and setting up CI/CD pipelines with GitHub Actions. It also covers advanced topics like Terraform for infrastructure as code, Kubernetes for container orchestration, observability with Prometheus and Grafana, and DevSecOps basics with tools like Trivy and Vault.</p>
]]></content:encoded></item><item><title><![CDATA[Suraj on cloud First Blog]]></title><description><![CDATA[Well, everybody might be wondering who I am ?
I am Suraj Vishwakarma from Bangalore, currently pursuing a B.Tech in Computer Science from Lovely Professional University. I have always been interested in playing with computers.
I got my first PC when ...]]></description><link>https://blog.surajv.dev/suraj-on-cloud-first-blog</link><guid isPermaLink="true">https://blog.surajv.dev/suraj-on-cloud-first-blog</guid><category><![CDATA[First Blog]]></category><dc:creator><![CDATA[Suraj vishwakarma]]></dc:creator><pubDate>Sat, 06 Dec 2025 16:22:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765043301704/41f29cd8-1a39-419e-b31c-faa65758a2ca.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-well-everybody-might-be-wondering-who-i-am"><strong>Well, everybody might be wondering who I am?</strong></h2>
<p>I am Suraj Vishwakarma from Bangalore, currently pursuing a B.Tech in Computer Science from Lovely Professional University. I have always been interested in playing with computers.</p>
<p>I got my first PC when I was in 6th grade. I had been asking my father for one since 3rd grade, and he eventually gave me a laptop. It was the first computer our entire family had ever owned; no one knew how to use it, and some of us had never even seen a laptop before.</p>
<p>I learned to use Paint when I was in school. They took the whole class to the computer lab, and the very first task I remember was being asked to type our names. I was unable to find where the letters were located on the keyboard.</p>
<h2 id="heading-from-town-to-big-city">From Town to Big City</h2>
<p>My family moved from Garhwa to Bangalore, and our whole life changed.</p>
]]></content:encoded></item></channel></rss>