DEV Community (@dev.to.web.brid.gy)
A constructive and inclusive social network for software developers. With you every step of your journey.

[bridged from https://dev.to/ on the web: https://fed.brid.gy/web/dev.to ]
DevOps to finance: Explaining CI costs to your CFO
Greater usage of your CI tool is a good thing. But as your toolchain scales, so does the bill, and it's only a matter of time before your finance team starts asking questions (just ask anyone in DevOps who's had to field a call from their CFO explaining a 40% spike in AWS costs). Sadly, justifying your CI spend to finance leadership can sometimes feel like speaking an entirely different language. This post will help you translate technical realities into financial terms, and to position yourself as a strategic partner, instead of 'just' a technologist. ## Quick Glossary **DevOps:** a modern software development set of practices that emphasizes collaboration between development and operations/infrastructure teams. The goal is to release high-quality software faster and more reliably by automating workflows, improving communication, and using CI/CD pipelines. **CI (Continuous Integration):** the practice of automatically building and testing code every time a team member pushes changes to a shared repository. CI helps catch bugs early, speeds up release cycles, and enables frequent deployments. **FinOps (Financial Operations):** a discipline that brings financial accountability to cloud spending. FinOps aligns engineering, finance, and business teams to track, manage, and optimize costs in real time, enabling better decision-making without slowing down development. **Build vs. Buy:** a strategic decision on whether to develop custom internal tools or purchase third-party solutions. This is often a key consideration when evaluating CI platforms and infrastructure, with implications for cost, speed, and flexibility. ## Why this conversation matters in DevOps DevOps has grown to encompass more than just engineering velocity: it's about delivering business value efficiently. And it's the CFO's job to measure that value across the company. They're the ones tasked with aligning budget and strategy, sometimes even overseeing HR, which is a key function in DevOps transformation. When they ask, _"How are our CI tools working so far?"_ they're really asking, _"Has this been a wise allocation of capital and what needs to change to get better?"_ So don't be surprised when you get hauled into a quarterly reporting meeting to justify your team's decisions and how they're panning out. To thrive in these meetings, the best DevOps teams are increasingly adopting a FinOps mindset. They're expanding beyond the remit of 'ship stuff faster' to _ship stuff faster at great business value._ ## How to explain your CI to a CFO Start by zooming out. Situate your CI within the broader DevOps toolchain and ground the conversation in a few fundamentals that everyone can agree on: why we implemented this CI tool in the first place; how we measure success; and how we're tracking. ### Frame your CI spend around three core pillars: * **Why we bought this tool** : Often, it's the CFO who initiated the DevOps process in the first place by asking the CTO and CIO to increase innovation while lowering budgets. Tie the decision back to the speed, quality, and outcome goals that you initially set. Maybe your CI tool enabled your team to jump from weekly to daily deploys. That's the ROI everyone can get behind. * **What metrics we're responsible for** : Stick to a few clear ones: lead time to production, build success rate, mean time to recovery (MTTR), and infrastructure cost per build are all strong options. 
If they ask for more, align on what they actually care about: reduced support tickets, better feature delivery, and fewer rollbacks.
* **Where we're succeeding and where we're not**: Be honest: if builds are slow or flaky, say so, and show how those hiccups have impacted business KPIs. If you've shaved 30% off build times through parallelization or ephemeral environments, highlight it. Or if CI bottlenecks prevented a bug fix from going out expediently, shine a light on how that hurt your customer commitments and what you recommend to make sure it doesn't happen again. This is your opportunity to show how DevOps aligns with company goals.

## How is spend tracking vs. target?

This is where you bring data. Compare current monthly CI spend against your forecast, and normalize it against engineering headcount or number of builds. Are costs scaling linearly with team size? Are we running redundant or unnecessary jobs? Are we hitting diminishing returns?

### Summary of 2025 software delivery metrics

Metric | P25 | P50 | P75 | AVG | Benchmark
---|---|---|---|---|---
Duration | 38s | 2m 43s | 8m 7s | 11m 2s | 10m
Throughput | 2.71 | 1.64 | 1 | 2.86 | 1
Mean time to recovery | 14m 21s | 63m 50s | 22h 5m 12s | 24h 15m 10s | 60m
Success rate | | | | 82.15% | 90%

_Source: The 2025 State of Software Delivery_

Use benchmarks. Show what best-in-class CI/CD looks like in your industry. Not to copy them, but to demonstrate you understand what 'good' looks like and where you stand. Highlight any recent optimization wins: maybe you switched to ARM-based runners and cut compute costs 40%, or introduced test-skipping logic to avoid redundant runs. Or maybe you sped up your builds by 40x by using a 3rd party service. It's not just about enabling automation anymore. It's gotta be cost-effective automation. When you walk into a stakeholder meeting and show that your team reduced cloud spend by 20% without impacting performance, you're proving that DevOps is a strategic partner in the success of the business.

## What's our plan going forward?

This is where the rubber meets the road. Propose a pragmatic plan to improve efficiency. That might include:

* Auditing unused third-party integrations or legacy pipelines
* Standardizing ephemeral environments to reduce runtime
* Moving from on-demand to reserved compute instances
* Consolidating tools
* Jettisoning in-house runners for 3rd party services that are faster and offer lower TCO

Finally, tie it all back to business outcomes. If you're proposing a tooling upgrade or refactor, explain how it improves time-to-market or engineer throughput. And don't forget the HR angle — DevOps transformation means retraining, restructuring, and readiness for change. The CFO is a key stakeholder in making that happen.

On that note, here are a few common questions your CFO might ask, and how to prep an answer for them:

* **Why is our CI spend increasing month-over-month?**
**Reply:** We're iterating faster and have seen an increase in developer activity, which has naturally driven more builds. However, we're actively managing that growth by optimizing our pipelines. For example, reducing redundant test runs, implementing caching, and moving some jobs to lower-cost compute options. We're tracking spend per build and are working toward keeping that metric stable even as volume grows.
* **What's our ROI from our CI investment?**
**Reply:** CI helps us detect bugs earlier, release features faster, and reduce incidents in production.
Since adopting our current setup, we've improved our deploy frequency by X%, and our lead time to production has dropped from Y days to Z. These improvements directly impact customer satisfaction, engineering throughput, and support load. * **How might we do this more cost effectively?** **Reply:** Yes, and we routinely check in on those options. We've benchmarked internal costs versus vendor solutions and identified areas to optimize. For example, we're evaluating ARM-based runners, ephemeral environments, and usage-based pricing models. We've also looked at build-vs-buy tradeoffs to ensure we're not over-engineering where a vendor tool makes more financial sense. * **How would we know if we're over-resourced for CI?** **Reply:** Our CI system is designed to scale with our team. While we've invested upfront in automation and tooling, that investment reduces manual work and prevents performance bottlenecks. We regularly review usage patterns to ensure we're not over-provisioning, and we've set clear thresholds for scaling resources up or down. ## Final thoughts CI isn't just a technical necessity, it's a vital lever to iterate faster and unlock productivity. Framing your CI spend within the broader goals of innovation, efficiency, and time-to-market helps translate your team's engineering work into business value. When you clearly explain why tools were chosen, what outcomes you're driving, how your costs compare to targets, and what you're doing to optimize, you shift the conversation from cost center to value creation. It's vital to talk your CFO's language when reviewing your CI performance so you can clearly communicate your wins and highlight what your team is doing to improve engineer velocity more efficiently. ## Where Depot fits in When it's time to prove ROI to your CFO, Depot makes the math simple. Our customers typically see 2-40x faster builds and cut their CI spend in half, the kind of concrete numbers that finance teams love to see. Whether you're optimizing Docker builds or GitHub Actions workflows, you can show measurable improvements: faster deployments, lower costs per build, and engineers who aren't waiting around. Start a free trial and bring real data to your next CFO meeting. Author: John Stocks @ Depot
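To make the "spend vs. target" discussion concrete, here is a minimal sketch of the normalization described above: cost per build and per engineer compared against an agreed target. All figures and field names are made up for illustration.

```python
# Rough sketch: normalize monthly CI spend against builds and headcount.
# Every number below is a hypothetical placeholder.

monthly = [
    # (month, ci_spend_usd, builds, engineers)
    ("2025-03", 18_400, 41_000, 62),
    ("2025-04", 21_100, 52_500, 64),
    ("2025-05", 24_900, 66_300, 65),
]

TARGET_COST_PER_BUILD = 0.40  # example target agreed with finance

for month, spend, builds, engineers in monthly:
    cost_per_build = spend / builds
    cost_per_engineer = spend / engineers
    status = "on target" if cost_per_build <= TARGET_COST_PER_BUILD else "over target"
    print(f"{month}: ${cost_per_build:.2f}/build, "
          f"${cost_per_engineer:,.0f}/engineer -> {status}")
```

Even a table this small shows whether spend is tracking volume (healthy growth) or outpacing it (something to optimize).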
dev.to
June 12, 2025 at 9:15 PM
The Business-First Approach to Cybersecurity: Why Technical Excellence Isn't Enough in 2025
**TL;DR** * Traditional cybersecurity focuses on technical controls but misses business context * The most effective security programs translate technical risks into business language * Combining technical depth with business acumen creates more impactful security outcomes * Real-world examples from my experience bridging marketing and cybersecurity ## The Problem with "Security First" Thinking After years working across digital marketing and cybersecurity, I've noticed something that might surprise you: **the most technically sound security implementations often fail to protect what actually matters**. Here's why: Most cybersecurity professionals are brilliant at identifying vulnerabilities, configuring SIEM systems, and responding to incidents. But they struggle to answer one critical question: _"What business impact does this security decision actually have?"_ ## The Business Context Gap Let me share a real example from my experience: ### The Alert Fatigue Scenario # Traditional approach: Alert on everything if suspicious_login_attempt: trigger_alert() block_user() notify_security_team() # Business-first approach: Context matters if suspicious_login_attempt: if user_accessing_critical_system: priority = "HIGH" business_impact = "Potential data breach, compliance violation" elif user_accessing_general_system: priority = "MEDIUM" business_impact = "Limited scope, monitor closely" trigger_contextual_alert(priority, business_impact) The difference? The second approach considers **business criticality** alongside technical risk. ## What I Learned Building Security in a Business Environment During my time as Digital Marketing Director at SIMARK, I wasn't just building websites and managing campaigns—I was creating systems that handled sensitive customer data, financial transactions, and real-time communications across multiple locations. ### Key Insights: **1. Security Decisions Are Business Decisions** When I implemented server hardening for our VPS infrastructure, the question wasn't "Is this the most secure configuration?" but rather "What's the optimal balance between security, performance, and operational efficiency?" **2. Communication Transforms Security Effectiveness** Building our real-time service status system required explaining to non-technical stakeholders why certain security measures would impact user experience. The ability to translate "SSL certificate management" into "customer trust and data protection" made all the difference. **3. Context Drives Priority** Not all vulnerabilities are created equal. A SQL injection vulnerability in our customer-facing e-commerce platform? Critical. The same vulnerability in an internal tool used by two people? Important, but not business-critical. ## The Technical-Business Translation Framework Here's a practical framework I've developed for making security decisions that actually matter: ### 1. Asset Classification by Business Impact Critical Assets: - Customer payment data - Real-time communication systems - Revenue-generating platforms Important Assets: - Internal tools - Development environments - Marketing systems Low Priority Assets: - Test environments - Documentation systems - Legacy unused systems ### 2. 
Risk Communication Matrix Technical Risk | Business Translation | Executive Action ---|---|--- "Unpatched Apache server" | "Customer data exposure risk" | "Immediate patching required" "Weak password policy" | "Potential account takeover" | "Policy update within 30 days" "Missing 2FA" | "Insider threat vulnerability" | "Phased implementation plan" ### 3. ROI-Driven Security Investments Instead of: _"We need a $50K SIEM solution"_ Try: _"This investment will reduce incident response time by 60% and potential breach costs by $200K"_ ## Why This Matters More in 2025 The cybersecurity landscape is evolving rapidly. Based on current trends: * **AI-powered attacks** require business-context responses, not just technical blocks * **Multi-cloud environments** need unified business risk assessment * **Remote work security** demands user experience considerations * **Compliance requirements** directly impact business operations ## Practical Steps to Bridge the Gap ### For Technical Professionals: 1. **Learn the business** : Understand how your organization makes money 2. **Quantify risks** : Always express technical risks in business terms 3. **Build relationships** : Partner with business stakeholders, don't just report to them 4. **Measure what matters** : Track business-relevant security metrics ### For Business Leaders: 1. **Invest in hybrid professionals** : Hire or develop people who understand both domains 2. **Ask the right questions** : Focus on business impact, not just technical compliance 3. **Enable communication** : Create forums for technical and business teams to collaborate 4. **Think strategically** : Security should enable business goals, not just prevent problems ## The Real-World Impact Here's what happens when you get this right: **Before Business-First Approach:** * 200+ daily security alerts * 90% false positives * Security team overwhelmed * Business stakeholders frustrated with "security theater" **After Business-First Approach:** * 20 business-relevant alerts per day * 70% true positives requiring action * Security team focused on real threats * Business stakeholders see security as business enabler ## Building Your Business-Security Skillset ### Technical Skills That Matter: * **SIEM and log analysis** (but focus on business-relevant patterns) * **Threat hunting** (prioritize business-critical assets) * **Incident response** (measure business impact, not just technical resolution) * **Automation** (free up time for strategic thinking) ### Business Skills That Matter: * **Financial literacy** (understand ROI calculations) * **Risk assessment** (quantify business impact) * **Communication** (translate technical concepts) * **Project management** (deliver business value) ## The Future of Cybersecurity The most successful cybersecurity professionals in 2025 and beyond won't just be technical experts—they'll be **business-technical translators** who can: * Identify which technical vulnerabilities actually threaten business objectives * Communicate security needs in language executives understand and act upon * Design security programs that enable business growth rather than just preventing problems * Measure security success in business terms ## Your Next Steps 1. **Audit your current approach** : Are you solving technical problems or business problems? 2. **Map your organization's critical business processes** : What would actually hurt if compromised? 3. **Practice translation** : Take your next security report and rewrite it in business language 4. 
**Build business relationships** : Spend time understanding what keeps your business leaders awake at night ## Final Thoughts Cybersecurity is ultimately about protecting what matters most to your organization. Technical excellence is necessary but not sufficient. The real competitive advantage comes from understanding how technical security decisions impact business outcomes. The future belongs to cybersecurity professionals who can think like business leaders while maintaining technical depth. It's not enough to be the best at finding vulnerabilities—you need to be the best at protecting business value. _What's your experience bridging technical and business aspects of cybersecurity? I'd love to hear your thoughts and experiences in the comments below._
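The alert-fatigue example earlier in the post is pseudocode; as a minimal, runnable sketch of the same idea, priority can be derived from the business classification in the framework's first step. The asset names, tiers, and wording below are invented for illustration, not taken from any real environment.

```python
# Sketch: score alerts by the business criticality of the affected asset.
# Asset names and tiers are illustrative only.

ASSET_CRITICALITY = {
    "payments-api": "critical",   # customer payment data
    "status-page": "important",   # real-time communication system
    "internal-wiki": "low",       # documentation system
}

def prioritize(alert: dict) -> dict:
    tier = ASSET_CRITICALITY.get(alert["asset"], "low")
    if tier == "critical":
        priority, impact = "HIGH", "Potential data breach / compliance violation"
    elif tier == "important":
        priority, impact = "MEDIUM", "Limited scope, monitor closely"
    else:
        priority, impact = "LOW", "Log and review in batch"
    return {**alert, "priority": priority, "business_impact": impact}

print(prioritize({"asset": "payments-api", "signal": "suspicious_login"}))
```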
dev.to
June 12, 2025 at 9:15 PM
React Reconciliation: From Stack to Fiber — What Changed and Why It Matters
If you’ve been working with React for a while or are just diving into its internals, you’ve probably heard about React Fiber — the new reconciliation algorithm that replaced the old stack reconciler back in React 16. But what exactly changed? Why did React need a rewrite? And how does knowing this help you as a developer? I’ll break down the old and new reconciliation algorithms, why Fiber was introduced, and what it means for your React apps — all with easy examples and casual vibes. Ready? Let’s go! ## The Old Way: Stack Reconciler — Synchronous and Blocking ## How React Used to Update the UI Before React 16, React’s reconciliation was pretty straightforward but had some serious limitations. Imagine React as a chef preparing a multi-course meal. The old chef (stack reconciler) would cook every dish one after another, without stopping. If one dish took too long, the whole meal was delayed, and hungry guests (your users) had to wait. ## Technically, React would: Synchronously traverse the entire component tree whenever state or props changed. Use recursive depth-first traversal to compare the new Virtual DOM with the old one. Update the real DOM all at once, blocking the browser during this process. ## Why This Was a Problem Long or complex updates could freeze the UI. Imagine typing in a form and React blocking your input because it’s busy rendering a big list. React had no way to pause or prioritise rendering work, so urgent things like user input couldn’t jump the queue. This made building smooth, interactive apps tricky. ## Enter React Fiber: The New, Smarter Chef ## What Changed with React 16? React Fiber was a complete rewrite of the reconciliation algorithm, introduced in React 16 (2017). Think of Fiber as a new chef who can multitask, prioritise dishes, and even pause cooking to answer the doorbell. ## Here’s what Fiber brought to the table: 1. Incremental rendering: React breaks updates into small units of work called fibers. 2. Interruptible work: React can pause, resume, or abort work based on priority. 3. Prioritised updates: User input and animations get higher priority than background tasks. 4. Concurrent rendering: React can work on multiple tasks simultaneously. 5. Better error handling: Thanks to error boundaries, React can isolate errors without crashing the whole app. ## How Fiber Works Under the Hood Instead of a recursive call stack, Fiber uses a linked list of fiber nodes, each representing a component instance with its state and props. ## When an update happens: React creates a work-in-progress fiber tree asynchronously. It processes small chunks of work, yielding control back to the browser to keep the UI responsive. Once the work is done, React commits all changes to the real DOM in one go. ## Why Should You Care About Fiber? 1. Your Apps Feel Snappier Because React can pause rendering to handle user input, your app won’t freeze during heavy updates. For example, typing in a search box while a large list updates stays smooth. 2. You Can Use Cool Features Like Suspense and Concurrent Mode These modern React features rely on Fiber’s ability to work asynchronously and prioritize updates. 3. Better Debugging with Error Boundaries Fiber introduced error boundaries, so parts of your UI can fail gracefully without taking down the whole app. 4. Write More Performant Components Understanding how Fiber schedules work helps you avoid unnecessary renders and optimize your components. 
## Quick Example: Why Fiber Matters Imagine a chat app where messages update rapidly while you’re typing. **Old React:** Typing might lag because React blocks rendering while updating the message list. **Fiber:** React can pause the message list update, let you type smoothly, then finish updating the list. ## Final Thoughts React Fiber is a game-changer that makes React apps more responsive and developer-friendly. Knowing how it works helps you write better components, use new React features effectively, and build smoother user experiences. So next time you’re optimising your React app or exploring Concurrent Mode, remember the journey from the old stack reconciler to Fiber — it’s all about making React smarter and your apps faster. Happy coding! 🚀 If you enjoyed this, feel free to share or drop a comment below! Let’s keep the React convo going.
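React's real scheduler is far more sophisticated, but the core idea described above, rendering split into small units of work that are processed until a time slice runs out and then paused so urgent work can run, can be shown with a tiny language-agnostic sketch. The Python below is purely conceptual and is not how React is implemented.

```python
import time
from collections import deque

# Conceptual sketch only, not React code. Each queued function stands in
# for one fiber: one small unit of rendering work.

def make_unit(i):
    return lambda: sum(range(50))  # pretend work for component i

pending = deque(make_unit(i) for i in range(10_000))

def work_loop(units, frame_budget_s=0.005):
    """Process units until the time slice expires, then yield control."""
    deadline = time.monotonic() + frame_budget_s
    while units and time.monotonic() < deadline:
        units.popleft()()          # do one unit of work
    return bool(units)             # True: more work remains after yielding

frames = 0
while work_loop(pending):
    frames += 1                    # here a browser would handle input, paint, etc.
print(f"finished across {frames + 1} time slices")
```

The key property is the early exit: work is chunked, so something urgent can always run between slices instead of waiting for one giant synchronous render.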
dev.to
June 12, 2025 at 9:15 PM
First Ever Images of the Sun’s Poles Open a New Frontier in Space Science
For the first time in history, scientists have captured clear images of the Sun’s poles. The milestone comes from the European Space Agency’s Solar Orbiter, which has traveled beyond the plane of the Earth’s orbit to observe the Sun from a unique vantage point. What it returned is more than just stunning imagery. It is data that could transform our understanding of solar physics and space weather. Until now, the Sun’s poles were largely a mystery. Most solar observations are made from within the ecliptic plane, the flat disk in which Earth and most other planets orbit. That meant researchers could only guess what was happening at the Sun’s north and south poles. The Solar Orbiter’s maneuver out of this plane has changed that. These new images show complex structures in the polar regions of the Sun, regions that play a critical role in driving the solar magnetic field. The poles are thought to be key in generating the solar cycle, an eleven year rhythm of solar activity that influences everything from sunspots to space weather events that can interfere with satellites and power grids on Earth. By observing these magnetic fields directly, scientists can now study how they evolve, interact, and possibly trigger solar storms. This could lead to better predictions of solar flares and coronal mass ejections, phenomena that send waves of charged particles hurtling toward Earth. In a world increasingly reliant on space based communication and energy infrastructure, that knowledge is not just interesting. It is vital. The Solar Orbiter’s mission is far from over. It will continue to loop closer to the Sun and tilt its orbit further, giving us an even more detailed look at these critical regions. Each new set of images and magnetic data points will feed into models that help us prepare for the future of solar activity. This mission is a reminder that even our closest star still holds secrets. With the right technology and a bold trajectory, we can uncover them, opening new doors in science and protecting the systems that power our daily lives. https://www.spacedaily.com/reports/Solar_Orbiter_delivers_first_clear_look_at_the_Suns_poles_from_deep_space_999.html
dev.to
June 12, 2025 at 9:14 PM
🚀 How We Built Blazephone: An AI-Powered Cloud Phone System for Modern Teams
Hey devs! 👋 I’m Rome, founder of Blazephone, and I wanted to share a quick look at how we built our AI-powered business phone system. 💡 The Problem Most phone systems are stuck in the past: clunky, expensive, and not built for modern teams. We set out to build something that: • Works out of the box, no code required • Uses AI for smarter routing, summaries, and automation • Integrates seamlessly with CRMs and support tools • Offers transparent pricing for startups and scaling teams ⚙️ The Stack We built Blazephone using: • Node.js + Firebase for backend and cloud functions • React + Tailwind for a clean, fast UI • Supabase for structured data and SMS logs • OpenAI for AI-powered summaries and auto-replies • Twilio + MessageBird for global telephony and WhatsApp • Intercom + Postmark for customer messaging and onboarding 🤖 AI Features Blazephone uses AI to: • Auto-route calls based on intent and availability • Summarize and tag conversations • Detect sentiment and flag priority messages • Provide agent insights and performance data 🧠 Dev Notes • WebRTC requires bulletproof failover for live calls • Carrier switching based on real-time quality = essential • Firebase queues + Supabase combo is a solid comms backbone • AI models need curated input, especially for customer-facing responses We’re live now and growing fast. If you’re building support tools, scaling a startup, or just hate bad call systems, check us out at https://blazephone.com. Feedback, testing, and dev collabs welcome! Let’s build smarter communication together. 🔥
dev.to
June 12, 2025 at 9:16 PM
How to Implement No-Code OSS Use Cases in Telecom Operations
Learn a proven five-step approach—define, gather, design, test, deploy—for no-code OSS workflows, plus key features of leading platforms like Symphonica. Implementing no-code OSS use cases doesn’t have to be overwhelming. This post outlines a clear five-step path—define objectives, gather inputs, design visually, test in sandbox, and deploy—while highlighting essential platform features like cloud-native architecture, pre-built connectors, version control, and real-time monitoring. Follow these guidelines to build reliable, production-ready workflows in hours instead of weeks. ## What Are the Key Steps to Build a No-Code OSS Workflow? * **Define Objectives & Scope**: Pinpoint the problem you want to solve (e.g., “Reduce manual broadband order validation”). * **Gather System Inputs** : Identify relevant systems—CRM, NMS/EMS, GIS, billing—and document API endpoints, authentication, and data requirements. * **Design the Flow Visually** : On a drag-and-drop canvas, place connectors, decision nodes, loops, and error-handling paths. * **Configure Business Rules & Error Handling**: Build “if–then” branches and specify retries for failed API calls. * **Test in Sandbox Mode** : Use realistic sample data to validate both success and failure paths. * **Deploy & Monitor**: Publish to production and set up dashboards or integrate with analytics (Grafana, Kibana) to track KPIs, error rates, and performance. ## Which Platforms Support No-Code OSS? Look for these essential features: * **Cloud-Native Architecture** : Auto-scaling clusters, high availability, multi-region support. * **Pre-Built Connector Library** : Integrations for CRM (Salesforce, HubSpot), NMS/EMS (NetConf, SNMP), and inventory (CMDB, IPAM). * **Intent-Driven Templates & Blueprints**: Starter workflows for provisioning, alarm remediation, and firmware rollouts. * **Version Control & Audit Trails**: Built-in version tagging, snapshots, and logs to track changes and simplify compliance. * **Real-Time Monitoring & Analytics**: Dashboards showing execution counts, throughput, error rates, resource usage, and SLA compliance. > To simplify and accelerate your adoption, choose Symphonica, the leader in no-code OSS. With its cloud-native scalability, extensive integration catalog, and intent-driven design studio, Symphonica empowers operators to build, test, and deploy workflows faster than ever before.
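The "Configure Business Rules & Error Handling" step above boils down to logic that a visual workflow node encodes for you. Purely as an illustration of what such a node does under the hood, here is a minimal retry-with-backoff sketch; the endpoint and retry policy are hypothetical and not tied to any specific platform.

```python
import time
import urllib.error
import urllib.request

# Illustration of the "retry failed API calls" rule a no-code node encodes.
# The URL and retry counts are placeholders.

def call_with_retries(url: str, attempts: int = 3, backoff_s: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts:
                raise  # escalate to the workflow's error-handling path
            time.sleep(backoff_s * attempt)  # simple linear backoff

# Example (hypothetical endpoint):
# call_with_retries("https://example.invalid/orders/validate")
```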
dev.to
June 12, 2025 at 9:35 PM
Automating Consul with Ansible: Infrastructure DNS for Devs
_Hi there! I'm Maneshwar. Right now, I'm building LiveAPI, a first-of-its-kind tool that helps you automatically index API endpoints across all your repositories. LiveAPI makes it easier to **discover**, **understand**, and **interact with APIs** in large infrastructures._

Continuing our Ansible journey, let's wire up **Consul** — HashiCorp's service mesh and internal DNS provider — using clean Ansible roles. We'll install it using HashiCorp's apt repository, configure it in a role-driven fashion, and deploy Consul agents as either **servers** or **clients** using tags. Let's get to it.

## The Playbook

Your `consul.yml` playbook defines which hosts should run the role and how we want to tag their responsibilities:

```yaml
- name: Install and configure Consul
  hosts: all
  become: yes
  roles:
    - consul
  tags:
    - master
    - client
```

This gives us the flexibility to run only server or client setup by tagging the playbook execution later.

## 🗂 Folder Layout

Here's how the `consul` role is structured (`ansible-galaxy init roles/consul --offline`):

```
roles/consul/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/
│   ├── install.yml
│   ├── configure.yml
│   └── main.yml
├── templates/
│   ├── client_consul.hcl.j2
│   ├── master_consul.hcl.j2
│   └── master_server.hcl.j2
├── vars/main.yml
```

## Installing Consul the Right Way

Use the official HashiCorp GPG key and apt repo setup (the new secure way):

### `tasks/install.yml`

```yaml
- name: Add HashiCorp GPG key (new method)
  ansible.builtin.shell: |
    curl -fsSL https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /etc/apt/keyrings/hashicorp-archive-keyring.gpg > /dev/null
  args:
    creates: /etc/apt/keyrings/hashicorp-archive-keyring.gpg

- name: Add HashiCorp repository (new method)
  ansible.builtin.shell: |
    echo "deb [signed-by=/etc/apt/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
  args:
    creates: /etc/apt/sources.list.d/hashicorp.list

- name: Run apt update
  ansible.builtin.shell: apt update

- name: Install Consul
  ansible.builtin.apt:
    name: consul
    state: present
    update_cache: no
```

> ⚠️ You can replace the `shell` tasks with `ansible.builtin.get_url` and `apt_repository` for a cleaner, idempotent install — but this approach is closer to the manual HashiCorp documentation and works well for quick setups.

## Templates: Server & Client Configs

Define separate configuration templates for master/server and clients.

### `templates/master_consul.hcl.j2`

```hcl
node_name      = "consul-{{ inventory_hostname }}-server"
server         = true
bootstrap      = true
datacenter     = "dc1"
data_dir       = "consul/data"
log_level      = "INFO"
bind_addr      = "0.0.0.0"
client_addr    = "0.0.0.0"
advertise_addr = "{{ internal_ip }}"

ui_config {
  enabled = true
}

connect {
  enabled = true
}

dns_config {
  enable_truncate = true
}
```

Use this for Consul servers that need to bootstrap the cluster and expose the UI.

### `templates/master_server.hcl.j2`

```hcl
log_level        = "DEBUG"
server           = true
bootstrap_expect = 1
advertise_addr   = "{{ internal_ip }}"

connect {
  enabled = true
}

ui_config {
  enabled = true
}
```

This can act as an additional config layer — or be merged into the main server template if needed.
### `templates/client_consul.hcl.j2`

```hcl
node_name      = "consul-{{ inventory_hostname }}-client"
server         = false
datacenter     = "dc1"
data_dir       = "/opt/consul"
log_level      = "INFO"
bind_addr      = "{{ internal_ip }}"
advertise_addr = "{{ internal_ip }}"
retry_join     = ["{{ groups['nomadclientandservers'] | map('extract', hostvars, 'internal_ip') | list | first }}"]

dns_config {
  enable_truncate = true
}

service {
  id      = "dns"
  name    = "dns"
  tags    = ["primary"]
  address = "localhost"
  port    = 8600

  check {
    id       = "dns"
    name     = "Consul DNS TCP on port 8600"
    tcp      = "localhost:8600"
    interval = "10s"
    timeout  = "1s"
  }
}
```

> This config sets up DNS forwarding and a basic health check for clients. Ensure `internal_ip` is defined in your host vars.

## Configuring Consul with Ansible

Now, wire it all together in the `configure.yml`:

### `tasks/configure.yml`

```yaml
- name: Create consul data directory
  ansible.builtin.file:
    path: /opt/consul
    state: directory
    owner: consul
    group: consul
    mode: 0755
  tags: [master, client]

- name: Set proper permissions for consul directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: consul
    group: consul
    recurse: yes
  loop:
    - /opt/consul
    - /etc/consul.d
  tags: [master, client]

- name: Deploy master Consul config
  ansible.builtin.template:
    src: master_consul.hcl.j2
    dest: /etc/consul.d/consul.hcl
    owner: consul
    group: consul
    mode: 0644
  when: "'master' in ansible_run_tags"
  tags: master

- name: Deploy master server config
  ansible.builtin.template:
    src: master_server.hcl.j2
    dest: /etc/consul.d/server.hcl
    owner: consul
    group: consul
    mode: 0644
  when: "'master' in ansible_run_tags"
  tags: master

- name: Deploy client Consul config
  ansible.builtin.template:
    src: client_consul.hcl.j2
    dest: /etc/consul.d/consul.hcl
    owner: consul
    group: consul
    mode: 0644
  when: "'client' in ansible_run_tags"
  tags: client

- name: Restart consul service
  ansible.builtin.systemd:
    name: consul
    state: restarted
  tags: [master, client]
```

## Running the Playbook

Split your deployment by tags:

```
ansible-playbook consul.yml --tags master -v
ansible-playbook consul.yml --tags client -v
```

This gives you full control over when to deploy servers vs clients.

## What You Achieved

* Installed Consul using HashiCorp's secure apt setup
* Bootstrapped a Consul server with the UI enabled
* Deployed DNS-aware clients with health checks
* Used tags to keep roles clean and reusable

This approach brings modularity, repeatability, and clean separation to your infra automation.

## 🧠 Bonus: Let LiveAPI Handle Your APIs

LiveAPI helps you get all your backend APIs documented in a few minutes. With LiveAPI, you can **generate interactive API docs** that allow users to search and execute endpoints directly from the browser. If you're tired of updating Swagger manually or syncing Postman collections, give it a shot.
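After the playbook runs, you may want to verify the same thing the client's TCP health check watches, namely that Consul's DNS endpoint answers on port 8600. A small stdlib-only sketch, with host and port assumptions mirroring the client template above:

```python
import socket

# Quick check that Consul's DNS endpoint accepts TCP connections,
# mirroring the `tcp = "localhost:8600"` health check in the client template.

def consul_dns_up(host: str = "localhost", port: int = 8600, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Consul DNS reachable:", consul_dns_up())
```

The equivalent command-line check is `dig @127.0.0.1 -p 8600 consul.service.consul`.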
dev.to
June 12, 2025 at 9:15 PM
Productivity with AI: A Double-Edged Boost
Over the past few months, I've become increasingly effective at using ChatGPT as a coding partner. My productivity has skyrocketed, and I've been learning new things at a rapid pace. A few days ago, based on a friend's recommendation, I downloaded and started using Cursor. I've been playing around with it, and having deep AI integration right in my code editor has been a game changer. The speed at which I can scaffold projects, add features, and iterate has surged once again. But this has also reinforced a principle that’s already become deeply ingrained in me from my work with ChatGPT: > **AI is a powerful tool, but it doesn't replace genuine knowledge, skills, and thoughtfulness.** ## ⚠️ The Limitations of AI Often, when I ask Cursor to generate code, that code looks polished and impressive — but: * ...it’s poorly integrated with the rest of my project, * ...it’s redundant or overengineered, * ...or it introduces subtle bugs that didn’t exist before. This isn't a dig at Cursor. I actually appreciate these moments. They give me a chance to notice flaws, refine logic, and sharpen my critical thinking. Reviewing my own code is one thing. But reviewing 50 lines of beautiful AI-generated code — and resisting the urge to just accept it — requires even more discipline and discernment because of how easy it is to simply leave it as is and move on. ## 🧠 The Real Multiplier: Staying Engaged Over time, I’ve come to see there’s a massive difference between using GenAI passively and using it _collaboratively_. If I use AI passively, I get a 5x productivity gain — I can deploy faster, finish tickets quicker, and move on to the next task. But even that 5x is often deceptive. The code might compile and the features might work, but the foundations are fragile. Rushed output often hides subtle bugs, poor structure, and short-sighted decisions — cracks I won’t notice until they cause friction later on, or worse, break in production. On the other hand, when I stay fully engaged — reading critically, reviewing suggestions carefully, questioning the structure — that 5x productivity boost becomes 10x. I not only move faster but also greatly reduce how much debugging and backtracking I need to do later on. The key is to keep my brain in the loop — and to keep learning. Whether it’s brushing up on design patterns, understanding performance tradeoffs, or deepening my knowledge of the frameworks I’m working with, skill development is what turns AI from a shortcut into a force multiplier. ## 🔍 Learning By Example One of my friends, a senior software engineer, told me she’s often slower than her peers in submitting code, because they use GenAI to fly through tickets whereas she works more carefully. But when code reviews come, their pull requests get kicked back, and hers glide through. By stepping through logic, checking for edge cases, and making sure everything aligns cleanly with the existing codebase, she does more work in the moment but eliminates unproductive back-and-forth and iteration down the line. Her reflection stuck with me. Productivity isn’t just about shipping fast. It’s about how much rework I avoid, and the long-term quality of my contributions. ## 🧘‍♂️ Where I Stand So that’s where I am today: deeply appreciative of GenAI, but even more grateful for the thoughtfulness, discipline, and wisdom that I’ve developed through using it with intention — and through actively building my own knowledge and skills along the way. I’m a big fan of AI-assisted development. 
I could talk for hours about how transformative it’s been. But I never let AI overshadow the values that guide my software practice: clarity, care, and craft.
dev.to
June 12, 2025 at 4:27 PM
I learned introduction of React...
Here’s a clear and concise explanation of React, SPAs, MPAs, and React components.

## What is React

ReactJS is a component-based JavaScript library used to build dynamic and interactive user interfaces. It simplifies the creation of single-page applications (SPAs) with a focus on performance and maintainability.

* It is developed and maintained by Facebook.
* The latest version of React is React 19.
* Uses a virtual DOM for faster updates.
* Supports a declarative approach to designing UI components.
* Ensures better application control with one-way data binding.

**SPA (Single Page Application)**

An SPA is a web app that loads one HTML page and dynamically updates the content as the user interacts with the app, without refreshing the entire page. React is often used to build SPAs because it efficiently updates the UI on the client side.

**MPA (Multi Page Application)**

An MPA consists of multiple separate HTML pages. Every time a user clicks a link, the browser loads a new page from the server, causing a full page refresh. Each page is a separate HTML file, there is a full reload on navigation, and this model is better for SEO and simple websites.

**React Components**

A React component is a reusable piece of code that defines a part of the user interface. It accepts inputs called props and returns JSX to describe what should be displayed on the screen. Components help build and organize UI in a modular way.

* Components are the building blocks of a React app.
* They are reusable pieces of UI, like buttons, headers, or entire pages.
* Components can be written as functions or classes (functional components are more common now).

**Return Function in Components**

* A React functional component is basically a function that returns JSX.
* JSX looks like HTML but is actually JavaScript that tells React what to render.
* The value you return from the component is what appears on the screen.

Example:

```
function Greeting() {
  return <h1>Hello, world!</h1>; // This JSX will be rendered in the UI
}
```

Here, the Greeting component returns an `<h1>` element. React takes that and displays "Hello, world!" on the webpage.
dev.to
June 12, 2025 at 4:27 PM
The Booming Era of Energy Storage: EMS and SCADA integration in Hybrid Power Plants
Introduction: A Market Ready for Transformation in Europe The global energy sector is fundamentally transforming, and Europe is at the epicenter of this shift. By 2030, the European energy storage market is projected to exceed €50 billion, driven by the convergence of advanced technologies and forward-thinking regulatory frameworks. Businesses integrating cutting-edge Energy Management Systems (EMS), next-generation SCADA architectures, and hybrid technologies such as Battery Energy Storage Systems (BESS) are well-positioned to lead this revolution. The European Union has laid the foundation for this transformation with two key regulations set to take effect in 2025: • Regulation (EU) 2024/1789: Focused on integrating renewable, hydrogen, and natural gas into a cohesive and sustainable energy system. • Regulation (EU) 2024/1747: Aims to enhance the flexibility and resilience of the electricity market. With a compound annual growth rate (CAGR) of over 2,1% between 2023 and 2030, Europe is paving the way for a smarter, more sustainable energy future. Hybrid Power Plants (HPPs) represent a breakthrough in renewable energy integration, seamlessly combining solar, wind, and BESS to create a robust and adaptive energy system. At the core of HPPs lies the interaction between SCADA data and EMS software, enabling optimized energy flow, real-time control, and reduced operational costs. Key Advantages of HPP Architecture • Profit Optimization: EMS platforms leverage market intelligence and SCADA, measurements to enhance trading strategies and minimize deviation penalties in European energy markets. • Reliability and Stability: Advanced algorithms and predictive analytics mitigate variability in renewable energy generation, ensuring consistent grid performance. SCADA System Architecture: Building Blocks of Energy Automation Modern SCADA architectures are the backbone of industrial automation, offering precise monitoring, control, and data acquisition capabilities. As energy systems become increasingly complex, SCADA systems must evolve to meet demands for interoperability, scalability, and security. SCADA architectures, ensuring robust system performance. • Human-machine interface (HMI) HMIs provide operators with real-time, intuitive visualization and control capabilities, enhancing situational awareness. • Remote Terminal Units (RTUs) RTUs collect, process, and transmit data from field devices to SCADA systems, bridging the physical and digital layers of energy management. • Programmable Logic Controllers (PLCs) PLCs integrate with SCADA to automate complex control processes, ensuring precision and reliability in energy operations SCADA and IoT Integration: A New Era of Connectivity The integration of SCADA architectures with IoT ecosystems is transforming energy systems, enabling seamless communication across devices and networks. Key benefits include: • Real-Time Data Exchange: IoT sensors feed SCADA systems with high-frequency, granular data for improved decision-making. • Remote Monitoring and Control: Operators can manage systems from any location, enhancing operational flexibility. • Edge Computing: Localized data processing ensures faster response times and reduces dependence on centralized infrastructure. The MQTT protocol has become essential for modern SCADA-IoT integration, offering: • Efficient Data Transmission: Lightweight communication even in low-bandwidth environments. • Interoperability: Seamless compatibility with IoT devices, Edge AI, and cloud systems. 
• Real-Time Insights: Enabling rapid action based on live data streams. The Role of Edge AI in Energy Management, Edge AI brings intelligence to the edge of the network, processing data locally on devices such as IoT sensors and gateways. This architecture reduces latency, enhances security, and provides real-time decision-making capabilities. Key Applications in Energy Storage • Predictive Demand Modeling • Advanced machine learning algorithms optimize energy storage by forecasting demand patterns. • Dynamic Dispatch Optimization • Algorithms ensure efficient energy distribution during peak demand periods. • Battery Lifecycle Management and predictive models monitor the State of Charge (SOC) and State of Health (SOH) to extend battery life and optimize operational efficiency. Technologies and Battery Innovation, Sodium-Ion Batteries As an alternative to lithium-ion, sodium-ion batteries are gaining momentum, offering: • Cost Advantages: Leveraging abundant sodium resources reduces production costs. • Sustainability: A greener, more environmentally friendly energy storage option. • Scalability: Ideal for large-scale grid applications, with market maturity expected by 2026. Vanadium Redox Batteries (VRBs) Known for their durability and scalability, VRBs are revolutionizing long-duration energy storage by providing: • High Cycle Stability: Ideal for applications requiring frequent charge-discharge cycles. • Scalability: Supporting large-scale energy demands without performance degradation. 2025: A Pivotal Year for Regulation and Innovation The European Union’s 2025 regulations will reshape the energy landscape by: • Standardizing Data Exchange: Mandating secure communication protocols. • Promoting Decentralized Solutions: Incentivizing Edge AI adoption to reduce cloud dependency. • Fostering Innovation: Offering subsidies for companies modernizing SCADA systems and integrating cutting-edge energy technologies. These policies aim to accelerate renewable adoption, improve grid stability, and ensure Europe remains at the forefront of the global energy transition. Trends to Watch • Hybrid Systems Intelligence and Combining SCADA, PPCs, and Edge AI for real-time, dynamic energy management. • Edge Mesh Collaboration, decentralized nodes in an Edge Mesh architecture enhance decision-making and fault tolerance. • Cloud-enabled SCADA Systems, providing scalability, disaster recovery, and AI integration for autonomous operations. • Data Sovereignty and Security, ensuring compliance with privacy and security standards in multi-stakeholder environments.
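As a toy illustration of the dynamic dispatch and State of Charge monitoring ideas mentioned above (thresholds and prices are invented, and real EMS dispatch logic is far more sophisticated):

```python
# Toy dispatch rule: charge when energy is cheap and the battery has headroom,
# discharge at peak prices while protecting State of Charge (SOC) limits.
# All thresholds and prices are illustrative only.

SOC_MIN, SOC_MAX = 0.15, 0.95          # protect battery health (SOH)
CHEAP_PRICE, PEAK_PRICE = 40.0, 120.0  # EUR/MWh, hypothetical

def dispatch(soc: float, price_eur_mwh: float) -> str:
    if price_eur_mwh <= CHEAP_PRICE and soc < SOC_MAX:
        return "charge"
    if price_eur_mwh >= PEAK_PRICE and soc > SOC_MIN:
        return "discharge"
    return "hold"

for soc, price in [(0.50, 35.0), (0.80, 150.0), (0.16, 150.0), (0.60, 80.0)]:
    print(f"SOC={soc:.2f}, price={price:>6.1f} -> {dispatch(soc, price)}")
```

In a real plant this decision would be driven by forecasts, market signals, and SCADA measurements rather than two fixed thresholds; the sketch only shows where EMS logic plugs into the data the article describes.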
dev.to
June 12, 2025 at 4:27 PM
🌳 Vertical Order Traversal of a Binary Tree – Explained with Java, Python, and C++ Solutions
## title: "Vertical Order Traversal of a Binary Tree: Explained with Java, Python, and C++" description: "A complete guide to solving the Vertical Order Traversal problem with visual explanations and solutions in Java, Python, and C++." tags: ["dsa", "leetcode", "binary-tree", "java", "python", "cpp", "algorithms"] # 🌳 Vertical Order Traversal of a Binary Tree – Explained with Java, Python, and C++ Solutions **By Virendra Jadhav** When working with binary trees, we usually traverse them in pre-order, in-order, post-order, or level-order.\ But **vertical order traversal** introduces a new dimension — literally. Instead of going top to bottom or left to right, we explore the tree column by column. In this post, you’ll learn: * ✅ What vertical order traversal is * ✅ A step-by-step breakdown with a visual example * ✅ Java, Python, and C++ solutions * ✅ Related problems to build deeper tree traversal skills ## 🔍 Problem Understanding Given the root of a binary tree, we need to return the vertical order traversal of its nodes' values. ### 👇 Visual Example: 3 / \ 9 20 / \ 15 7 xpected Output: [ [9], [3, 15], [20], [7] ] ### Explanation: * The leftmost vertical line contains node 9. * The next vertical line contains nodes 3 and 15. * The next contains node 20. * The rightmost vertical line contains node 7. ## 💡 Intuition To solve this, we need to: * Track each node’s **vertical index** (horizontal distance from the root). * Traverse the tree using **DFS** while keeping track of the current level and vertical index. * Use a **TreeMap** to automatically sort the vertical indices. * At each vertical index, sort nodes by their level and then by value. ## 🚀 Approach ### Key Ideas: * Use a nested TreeMap: TreeMap<VerticalIndex, TreeMap<Level, PriorityQueue<NodeValues>>> * Store nodes in a min-heap (PriorityQueue) to handle tie-breakers when nodes share the same position. * Traverse using a helper function that carries `level` and `verticalIndex`. 
## ✅ Java Solution

```java
class Solution {
    public List<List<Integer>> verticalTraversal(TreeNode root) {
        List<List<Integer>> result = new ArrayList<>();
        if (root == null) return result;

        TreeMap<Integer, TreeMap<Integer, PriorityQueue<Integer>>> map = new TreeMap<>();
        traverse(root, map, 0, 0);

        for (TreeMap<Integer, PriorityQueue<Integer>> levels : map.values()) {
            List<Integer> vertical = new ArrayList<>();
            for (PriorityQueue<Integer> nodes : levels.values()) {
                while (!nodes.isEmpty()) {
                    vertical.add(nodes.poll());
                }
            }
            result.add(vertical);
        }
        return result;
    }

    private void traverse(TreeNode node, TreeMap<Integer, TreeMap<Integer, PriorityQueue<Integer>>> map, int level, int verticalIndex) {
        if (node == null) return;
        map.putIfAbsent(verticalIndex, new TreeMap<>());
        map.get(verticalIndex).putIfAbsent(level, new PriorityQueue<>());
        map.get(verticalIndex).get(level).offer(node.val);
        traverse(node.left, map, level + 1, verticalIndex - 1);
        traverse(node.right, map, level + 1, verticalIndex + 1);
    }
}
```

## ✅ Python Solution

```python
from collections import defaultdict, deque

class Solution:
    def verticalTraversal(self, root):
        node_map = defaultdict(lambda: defaultdict(list))
        queue = deque([(root, 0, 0)])

        while queue:
            node, verticalIndex, level = queue.popleft()
            if node:
                node_map[verticalIndex][level].append(node.val)
                queue.append((node.left, verticalIndex - 1, level + 1))
                queue.append((node.right, verticalIndex + 1, level + 1))

        result = []
        for x in sorted(node_map.keys()):
            vertical = []
            for y in sorted(node_map[x].keys()):
                vertical.extend(sorted(node_map[x][y]))
            result.append(vertical)
        return result
```

## ✅ C++ Solution

```cpp
class Solution {
public:
    vector<vector<int>> verticalTraversal(TreeNode* root) {
        map<int, map<int, multiset<int>>> nodes;
        queue<tuple<TreeNode*, int, int>> q;
        q.push({root, 0, 0});

        while (!q.empty()) {
            auto [node, verticalIndex, level] = q.front();
            q.pop();
            if (node) {
                nodes[verticalIndex][level].insert(node->val);
                q.push({node->left, verticalIndex - 1, level + 1});
                q.push({node->right, verticalIndex + 1, level + 1});
            }
        }

        vector<vector<int>> result;
        for (auto &[x, y_map] : nodes) {
            vector<int> vertical;
            for (auto &[y, node_set] : y_map) {
                vertical.insert(vertical.end(), node_set.begin(), node_set.end());
            }
            result.push_back(vertical);
        }
        return result;
    }
};
```

## 🕒 Complexity

* **Time Complexity:** O(N log N) — sorting and TreeMap operations dominate.
* **Space Complexity:** O(N) — all nodes are stored in maps.

## 🔗 Related LeetCode Problems

* Binary Tree Vertical Order Traversal
* Binary Tree Level Order Traversal
* Binary Tree Zigzag Level Order Traversal
* Binary Tree Right Side View

## 👌 Final Thoughts

Vertical Order Traversal is a perfect example of using smart data structures like **TreeMap** and **PriorityQueue** to solve complex traversal problems in trees. If you found this helpful, let's connect and explore more challenging DSA problems together!
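As a quick way to try the Python solution above on the sample tree from the problem statement, a minimal driver might look like this. The `TreeNode` class is the usual LeetCode definition, assumed here rather than provided by the post, and `Solution` is the Python class shown earlier.

```python
# Minimal driver for the Python solution above.
# TreeNode mirrors the standard LeetCode node definition.

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().verticalTraversal(root))  # [[9], [3, 15], [20], [7]]
```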
dev.to
June 12, 2025 at 4:27 PM
Iterator in Python (10)
Buy Me a Coffee☕ *Memos: * My post explains an iterator (1). * My post explains an iterator (2). * My post explains a generator. * My post explains a class-based iterator with __iter__() and/or __next__(). * My post explains itertools about count(), cycle() and repeat(). * My post explains itertools about accumulate(), batched(), chain() and chain.from_iterable(). * My post explains itertools about compress(), filterfalse(), takewhile() and dropwhile(). * My post explains itertools about groupby() and islice(). * My post explains itertools about pairwise(), starmap(), tee() and zip_longest(). ## _itertools has the functions to create iterators._ product() can return the iterator which does cartesian product with the elements of `*iterables` one by one to return a tuple of zero or more elements one by one as shown below: *Memos: * The 1st or the later arguments are `*iterables`(Optional-Type:`iterable`). *Don't use any keywords like `*iterables=`, `iterables=`, `*iterable=`, `iterable=`, etc. * The 2nd argument is `repeat`(Optional-Default:`1`-Type:`int`): *Memos: * It's the length of the returned tuple. * It must be `0 <= x`. * `repeat=` must be used. from itertools import product v = product() v = product(repeat=1) print(v) # <itertools.product object at 0x000001BE99723500> from itertools import product v = product('ABC') v = product('ABC', repeat=1) v = product(['A', 'B', 'C'], repeat=1) print(next(v)) # ('A',) print(next(v)) # ('B',) print(next(v)) # ('C',) print(next(v)) # StopIteration: from itertools import product v = product('ABC', repeat=2) print(next(v)) # ('A', 'A') print(next(v)) # ('A', 'B') print(next(v)) # ('A', 'C') print(next(v)) # ('B', 'A') print(next(v)) # ('B', 'B') print(next(v)) # ('B', 'C') print(next(v)) # ('C', 'A') print(next(v)) # ('C', 'B') print(next(v)) # ('C', 'C') print(next(v)) # StopIteration: from itertools import product for x in product('ABC', repeat=2): print(x) # ('A', 'A') # ('A', 'B') # ('A', 'C') # ('B', 'A') # ('B', 'B') # ('B', 'C') # ('C', 'A') # ('C', 'B') # ('C', 'C') from itertools import product for x in product('ABC', repeat=3): print(x) # ('A', 'A', 'A') # ('A', 'A', 'B') # ('A', 'A', 'C') # ('A', 'B', 'A') # ('A', 'B', 'B') # ('A', 'B', 'C') # ('A', 'C', 'A') # ('A', 'C', 'B') # ('A', 'C', 'C') # ('B', 'A', 'A') # ('B', 'A', 'B') # ('B', 'A', 'C') # ('B', 'B', 'A') # ('B', 'B', 'B') # ('B', 'B', 'C') # ('B', 'C', 'A') # ('B', 'C', 'B') # ('B', 'C', 'C') # ('C', 'A', 'A') # ('C', 'A', 'B') # ('C', 'A', 'C') # ('C', 'B', 'A') # ('C', 'B', 'B') # ('C', 'B', 'C') # ('C', 'C', 'A') # ('C', 'C', 'B') # ('C', 'C', 'C') permutations() can return the iterator which permutates the elements of `iterable` one by one to return a tuple of zero or more elements one by one as shown below: *Memos: * The 1st argument is `iterable`(Required-Type:`iterable`). * The 2nd argument is `r`(Optional-Default:`None`-Type:`int`): *Memos: * It's the length of the returned tuple. * If it's `None` or not set, the length of `iterable` is used. * It must be `0 <= x`. 
from itertools import permutations v = permutations(iterable='') v = permutations(iterable='', r=0) v = permutations(iterable=[]) print(v) # <itertools.permutations object at 0x000001BE9908AE30> print(next(v)) # () print(next(v)) # StopIteration: from itertools import permutations v = permutations(iterable='A') v = permutations(iterable='A', r=1) v = permutations(iterable=['A']) print(next(v)) # ('A',) print(next(v)) # StopIteration: from itertools import permutations v = permutations(iterable='AB') v = permutations(iterable='AB', r=2) v = permutations(iterable=['A', 'B']) print(next(v)) # ('A', 'B') print(next(v)) # ('B', 'A') print(next(v)) # StopIteration: from itertools import permutations v = permutations(iterable='ABC') v = permutations(iterable='ABC', r=3) v = permutations(iterable=['A', 'B', 'C']) print(next(v)) # ('A', 'B', 'C') print(next(v)) # ('A', 'C', 'B') print(next(v)) # ('B', 'A', 'C') print(next(v)) # ('B', 'C', 'A') print(next(v)) # ('C', 'A', 'B') print(next(v)) # ('C', 'B', 'A') print(next(v)) # StopIteration: from itertools import permutations v = permutations(iterable='ABC', r=2) v = permutations(iterable=['A', 'B', 'C'], r=2) print(next(v)) # ('A', 'B') print(next(v)) # ('A', 'C') print(next(v)) # ('B', 'A') print(next(v)) # ('B', 'C') print(next(v)) # ('C', 'A') print(next(v)) # ('C', 'B') print(next(v)) # StopIteration: from itertools import permutations for x in permutations(iterable='ABC', r=2): print(x) # ('A', 'B') # ('A', 'C') # ('B', 'A') # ('B', 'C') # ('C', 'A') # ('C', 'B') from itertools import permutations for x in permutations(iterable='ABC', r=3): print(x) # ('A', 'B', 'C') # ('A', 'C', 'B') # ('B', 'A', 'C') # ('B', 'C', 'A') # ('C', 'A', 'B') # ('C', 'B', 'A')
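As a small practical note (not part of the memos above), product() is handy for enumerating test parameter grids and permutations() for trying every ordering of a short sequence. For example:

```python
from itertools import permutations, product

# Enumerate a test matrix: every (python_version, os) pair.
for combo in product(["3.11", "3.12"], ["linux", "windows"]):
    print(combo)
# ('3.11', 'linux') ('3.11', 'windows') ('3.12', 'linux') ('3.12', 'windows')

# Every possible ordering of three pipeline steps.
for order in permutations(["build", "test", "deploy"], r=3):
    print(order)
```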
dev.to
June 12, 2025 at 4:27 PM
Adaptation Rules from TypeScript to ArkTS (4)
# ArkTS Constraints on TypeScript Features

## No Support for Conditional Types

* **Rule**: arkts-no-conditional-types
* **Severity**: Error
* **Description**: ArkTS does not support conditional type aliases. Introduce new types with explicit constraints or rewrite logic using Object.
* **TypeScript Example**:

```typescript
type X<T> = T extends number ? T : never;
type Y<T> = T extends Array<infer Item> ? Item : never;
```

* **ArkTS Example**:

```typescript
// Provide explicit constraints in type aliases
type X1<T extends number> = T;

// Rewrite with Object, with less type control and a need for more type checks
type X2<T> = Object;

// Item must be used as a generic parameter and correctly instantiable
type YI<Item, T extends Array<Item>> = Item;
```

## No Support for Field Declarations in Constructors

* **Rule**: arkts-no-ctor-prop-decls
* **Severity**: Error
* **Description**: ArkTS does not support declaring class fields within constructors. Declare these fields within the class.
* **TypeScript Example**:

```typescript
class Person {
  constructor(
    protected ssn: string,
    private firstName: string,
    private lastName: string
  ) {
    this.ssn = ssn;
    this.firstName = firstName;
    this.lastName = lastName;
  }

  getFullName(): string {
    return this.firstName + ' ' + this.lastName;
  }
}
```

* **ArkTS Example**:

```typescript
class Person {
  protected ssn: string;
  private firstName: string;
  private lastName: string;

  constructor(ssn: string, firstName: string, lastName: string) {
    this.ssn = ssn;
    this.firstName = firstName;
    this.lastName = lastName;
  }

  getFullName(): string {
    return this.firstName + ' ' + this.lastName;
  }
}
```

## No Support for Constructor Signatures in Interfaces

* **Rule**: arkts-no-ctor-signatures-iface
* **Severity**: Error
* **Description**: ArkTS does not support constructor signatures in interfaces. Use functions or methods instead.
* **TypeScript Example**:

```typescript
interface I {
  new (s: string): I;
}

function fn(i: I) {
  return new i('hello');
}
```

* **ArkTS Example**:

```typescript
interface I {
  create(s: string): I;
}

function fn(i: I) {
  return i.create('hello');
}
```

## No Support for Index Access Types

* **Rule**: arkts-no-aliases-by-index
* **Severity**: Error
* **Description**: ArkTS does not support index access types.

## No Support for Field Access by Index

* **Rule**: arkts-no-props-by-index
* **Severity**: Error
* **Description**: ArkTS does not support dynamic field declaration or access. You can only access fields declared in the class or inherited visible fields. Accessing other fields will result in a compile-time error.
* **TypeScript Example**:

```typescript
class Point {
  x: string = '';
  y: string = '';
}
let p: Point = { x: '1', y: '2' };
console.log(p['x']);

class Person {
  name: string = '';
  age: number = 0;
  [key: string]: string | number;
}
let person: Person = {
  name: 'John',
  age: 30,
  email: '***@example.com',
  phoneNumber: '18*********',
};
```

* **ArkTS Example**:

```typescript
class Point {
  x: string = '';
  y: string = '';
}
let p: Point = { x: '1', y: '2' };
console.log(p.x);

class Person {
  name: string;
  age: number;
  email: string;
  phoneNumber: string;

  constructor(name: string, age: number, email: string, phoneNumber: string) {
    this.name = name;
    this.age = age;
    this.email = email;
    this.phoneNumber = phoneNumber;
  }
}
let person = new Person('John', 30, '***@example.com', '18*********');
console.log(person['name']);          // Compile-time error
console.log(person.unknownProperty);  // Compile-time error

let arr = new Int32Array(1);
arr[0];
```
dev.to
June 12, 2025 at 4:27 PM
Understanding Hyperlane's Middleware System in Depth: Practice Notes from a Third-Year Student
# Understanding Hyperlane's Middleware System in Depth: Practice Notes from a Third-Year Student

As a third-year computer science student, I got to know the Hyperlane framework's middleware system in depth while building a campus project with it. Today I'd like to share what I learned along the way.

## 1. Middleware System Overview

### 1.1 An Elegant Implementation of the Onion Model

```
graph TD
    A[Client request] --> B[Auth middleware]
    B --> C[Logging middleware]
    C --> D[Controller]
```

Hyperlane's middleware follows the onion model: requests travel from the outer layers to the inner layers, which keeps the request-handling flow clear and controllable.

### 1.2 Middleware Types

```rust
async fn request_middleware(ctx: Context) {
    let socket_addr = ctx.get_socket_addr_or_default_string().await;
    ctx.set_response_header(SERVER, HYPERLANE)
        .await
        .set_response_header("SocketAddr", socket_addr)
        .await;
}
```

Compared with frameworks that register middleware through traits or layers, Hyperlane registers plain async functions directly, which is more intuitive.

## 2. Practical Case Studies

### 2.1 Implementing an Authentication Middleware

```rust
async fn auth_middleware(ctx: Context) {
    let token = ctx.get_request_header("Authorization").await;
    match token {
        Some(token) => {
            // Verification logic
            ctx.set_request_data("user_id", "123").await;
        }
        None => {
            ctx.set_response_status_code(401)
                .await
                .set_response_body("Unauthorized")
                .await;
        }
    }
}
```

### 2.2 A Performance-Monitoring Middleware

```rust
async fn perf_middleware(ctx: Context) {
    let start = std::time::Instant::now();
    // Request handling
    let duration = start.elapsed();
    ctx.set_response_header("X-Response-Time", duration.as_millis().to_string())
        .await;
}
```

## 3. Performance Optimization in Practice

### 3.1 Middleware Performance Tests

In my project I benchmarked different middleware combinations:

Middleware combination | QPS | Memory usage
---|---|---
No middleware | 324,323 | Baseline
Auth middleware | 298,945 | +5%
Auth + logging middleware | 242,570 | +8%

### 3.2 Optimization Tips

1. **Optimize middleware order**

```rust
server
    .middleware(perf_middleware)
    .await
    .middleware(auth_middleware)
    .await
    .run()
    .await;
```

2. **Optimize data sharing**

```rust
ctx.set_request_data("cache_key", "value").await;
```

## 4. Solutions to Common Problems

### 4.1 Middleware Execution Order

In v4.89+:

```rust
// Handling request interruption
if should_abort {
    ctx.aborted().await;
    return;
}
```

### 4.2 Error-Handling Best Practices

```rust
async fn error_middleware(ctx: Context) {
    if let Some(err) = ctx.get_error().await {
        ctx.set_response_status_code(500)
            .await
            .set_response_body(err.to_string())
            .await;
    }
}
```

## 5. Development Takeaways

### 5.1 Principles for Writing Middleware

1. **Single responsibility**: each middleware does exactly one thing
2. **Chained processing**: lean on the properties of the onion model
3. **Error propagation**: use the error-handling mechanism sensibly
4. **Performance first**: mind the execution cost of each middleware

### 5.2 Lessons from Practice

1. Use Context to store request-scoped data
2. Plan the middleware execution order carefully
3. Watch the performance impact of async operations
4. Keep the code concise and maintainable

## 6. Comparison with Other Frameworks

Feature | Hyperlane | Actix-Web | Axum
---|---|---|---
Middleware registration | Function-based | Trait | Tower
Execution model | Onion model | Linear | Onion model
Error handling | Built in | Custom | Built in
Performance overhead | Minimal | Small | Small

## 7. Learning Suggestions

1. **Start with simple middleware**
   * Implement a logging middleware first
   * Understand the request lifecycle
   * Get comfortable with the error-handling mechanism
2. **Progress step by step**
   * Learn how the built-in middleware is used
   * Try writing custom middleware
   * Explore the advanced features

## 8. Looking Ahead

1. Explore more middleware use cases
2. Optimize middleware performance
3. Contribute middleware back to the community
4. Study middleware design for microservice architectures

As a student developer, digging into Hyperlane's middleware system gave me a fresh perspective on web development. The framework not only offers powerful features but also helped me build good development habits. I hope these notes are useful to other students learning Rust web development!
dev.to
June 12, 2025 at 4:27 PM
How to Properly Setup Hilt in Android Jetpack Compose Project in 2025
### Direct Answer **Key Points:** * Research suggests Hilt 2.56 is the latest version, using KSP instead of KAPT for faster annotation processing. * It seems likely that you need to update Gradle files to use `ksp` for Hilt and AndroidX Hilt dependencies. * The evidence leans toward including both core Hilt (2.56) and AndroidX Hilt (1.2.0) compilers with KSP for full compatibility. **Project-Level`build.gradle.kts` Update:** * Add plugins for Hilt and KSP with versions: plugins { id("com.android.application") version "8.5.2" apply false id("org.jetbrains.kotlin.android") version "2.0.20" apply false id("com.google.dagger.hilt.android") version "2.56" apply false id("com.google.devtools.ksp") version "1.9.24-1.0.20" apply false } **App-Level`build.gradle.kts` Update:** * Apply plugins and update dependencies: plugins { id("com.android.application") id("org.jetbrains.kotlin.android") id("com.google.dagger.hilt.android") id("com.google.devtools.ksp") } dependencies { val hiltVersion = "2.56" implementation("com.google.dagger:hilt-android:$hiltVersion") ksp("com.google.dagger:hilt-compiler:$hiltVersion") val androidxHiltVersion = "1.2.0" implementation("androidx.hilt:hilt-work:$androidxHiltVersion") ksp("androidx.hilt:hilt-compiler:$androidxHiltVersion") } * Replace `kapt` with `ksp` for all Hilt-related annotation processing. **Notes:** * Ensure your Kotlin and Android Gradle Plugin versions are compatible with KSP 1.9.24-1.0.20. * If using AndroidX Hilt extensions, include both compilers to avoid issues. ### Survey Note: Detailed Analysis of Hilt with KSP in 2025 This section provides a comprehensive overview of transitioning Hilt to use KSP instead of KAPT, based on the latest available information as of June 12, 2025. It covers the evolution of Hilt, the shift to KSP, and practical implementation details for Android projects. #### Background on Hilt and Dependency Injection Hilt, built on Dagger, is a dependency injection library recommended for Android apps, simplifying the setup and management of dependencies. It reduces boilerplate code by generating Dagger components automatically. Traditionally, Hilt used KAPT for annotation processing, but with Kotlin's advancements, KSP has emerged as a faster, more efficient alternative. #### Transition to KSP: Why and How KSP (Kotlin Symbol Processing) is designed to leverage Kotlin's compiler infrastructure, offering up to twice the speed of KAPT. This transition is part of Kotlin's push for better performance, especially for annotation-heavy libraries like Hilt. Research suggests that Hilt, starting from version 2.48, supports KSP, with the latest version identified as 2.56 in recent Dagger releases (Releases · google/dagger). For AndroidX Hilt extensions, the latest stable version is 1.2.0, supporting KSP as introduced in version 1.1.0-alpha01 (Hilt | Jetpack | Android Developers). The shift to KSP requires updating Gradle configurations, replacing `kapt` with `ksp` for annotation processing. This change is crucial for projects aiming to reduce build times and improve development efficiency. #### Implementation Details To integrate Hilt with KSP, updates are needed in both project-level and app-level `build.gradle.kts` files. Below is a detailed breakdown: * **Project-Level`build.gradle.kts`**: This file defines the plugins and their versions for the entire project. 
For Hilt 2.56 and KSP, the configuration is: plugins { id("com.android.application") version "8.5.2" apply false id("org.jetbrains.kotlin.android") version "2.0.20" apply false id("com.google.dagger.hilt.android") version "2.56" apply false id("com.google.devtools.ksp") version "1.9.24-1.0.20" apply false } The versions (e.g., 8.5.2 for Android Gradle Plugin, 2.0.20 for Kotlin) should align with your project's setup, but these are based on recent documentation for compatibility with Hilt 2.56 and KSP 1.9.24-1.0.20, as noted in Dagger's release notes. * **App-Level`build.gradle.kts`**: Here, you apply the plugins and define dependencies. The updated configuration is: plugins { id("com.android.application") id("org.jetbrains.kotlin.android") id("com.google.dagger.hilt.android") id("com.google.devtools.ksp") } android { // Your existing Android configuration } dependencies { val hiltVersion = "2.56" implementation("com.google.dagger:hilt-android:$hiltVersion") ksp("com.google.dagger:hilt-compiler:$hiltVersion") val androidxHiltVersion = "1.2.0" implementation("androidx.hilt:hilt-work:$androidxHiltVersion") ksp("androidx.hilt:hilt-compiler:$androidxHiltVersion") } This setup ensures that both core Hilt and AndroidX Hilt extensions are processed with KSP. The `ksp` configuration replaces `kapt`, aligning with Hilt's support for KSP as detailed in Dagger's documentation (Dagger KSP). #### Compatibility and Considerations * **Version Compatibility** : Hilt 2.56 requires KSP 1.9.24-1.0.20 or higher, as specified in Dagger's release notes. Ensure your Kotlin version (e.g., 2.0.20) and Android Gradle Plugin version (e.g., 8.5.2) are compatible with these requirements. * **AndroidX Hilt Extensions** : If using extensions like `hilt-work`, include `androidx.hilt:hilt-compiler` with KSP, as recommended for versions 1.1.0 and above. This ensures all annotations are processed correctly, avoiding potential conflicts with Javac/KAPT processors, as noted in Dagger's KSP documentation. * **Build Performance** : KSP can reduce build times significantly, especially for large projects, making it a worthwhile upgrade from KAPT. #### Table: Hilt and KSP Version Compatibility Component | Version | Notes ---|---|--- Hilt (core) | 2.56 | Latest as of Dagger 2.56 release KSP | 1.9.24-1.0.20 | Minimum required for Hilt 2.56 AndroidX Hilt (e.g., hilt-work) | 1.2.0 | Latest stable, supports KSP since 1.1.0 Kotlin | 2.0.20 | Compatible with KSP 1.9.24-1.0.20 Android Gradle Plugin | 8.5.2 | Ensure compatibility with Kotlin and KSP #### Practical Example For a project using Hilt with KSP, the migration involves replacing all `kapt` lines with `ksp`. For instance, if you had: dependencies { implementation("com.google.dagger:hilt-android:2.48") kapt("com.google.dagger:hilt-compiler:2.48") } Update to: dependencies { implementation("com.google.dagger:hilt-android:2.56") ksp("com.google.dagger:hilt-compiler:2.56") } And if using AndroidX Hilt, add: implementation("androidx.hilt:hilt-work:1.2.0") ksp("androidx.hilt:hilt-compiler:1.2.0") #### Conclusion The transition to Hilt with KSP is straightforward, requiring updates to Gradle plugins and dependencies. By using Hilt 2.56 and AndroidX Hilt 1.2.0 with KSP, you leverage faster annotation processing, enhancing build performance. Ensure compatibility with your project's Kotlin and Android Gradle Plugin versions, and test thoroughly to confirm all Hilt annotations are processed correctly. 
#### Key Citations * Releases Dagger GitHub page with Hilt updates * Hilt Jetpack Android Developers releases * Dagger KSP documentation for Hilt setup
dev.to
June 12, 2025 at 2:30 PM
In praise of opportunity roadmaps
Product roadmaps are a fact of life. They’re a good way to set strategy, and communicate progress. But without care, they can become an albatross, burdening teams under the weight of hard deadlines and sprawling interdependencies. But they don’t need to be. Sometimes you just need to know: * What’s happening now * What you think will happen next * What you’ve done * What you might do later We recommend opportunity roadmaps to stay lean and adaptable. Here’s why we like them, and how we use them. ## What a roadmap is, and why you need one First, the basics. A product roadmap is a visual, strategic document that shows work that is happening now, and which might happen in future. A good roadmap does two things. It helps the team prioritise what to do next. And it acts as a tool for transparent communication: it helps teams and stakeholders understand the status of the project, and its direction of travel. There are many kinds of roadmap, which fall on a spectrum from lightweight to heavy-duty. At the latter end are the traditional roadmaps. These typically set out a linear plan, far into the future, mapping timelines and dependencies. They may also include KPIs. At the other end, the case is sometimes made that you don’t need a roadmap at all. The rationale tends to be that good ideas always come back round, and the next priority will always be clear. Having no roadmap might work for single-product startups, but it’s unlikely to fly in most organisations — with good reason. It’s perfectly reasonable that stakeholders in larger organisations keep tabs on what’s happening now and next — especially those ultimately responsible for the success of the project, who need to report on progress to higher-ups. Much like our thinking behind why we use Kanban, not Scrum, we advocate for lightweight roadmaps. They maintain flexibility, which avoids locking the team into the wrong priorities. But they still communicate the strategy, reassuring everyone that the team isn’t flying blind. ## The opportunity roadmap We use something more adaptive — an opportunity roadmap. It sits somewhere between the just-in-time delivery of Kanban and the visibility of Scrum and waterfall roadmaps. This kind of roadmap doesn’t make hard commitments. It doesn’t claim to know everything up front. It’s there to show: * What we’re working on now * What we could start work on next * Everything we could attempt given sufficient time and resources It’s designed to evolve, not predict. We review it every couple of weeks. We might: 1. Reorder the upcoming milestones based on new information or changes to team capacity 2. Start working towards new milestones — if they still look like the right opportunities ## Avoid artificial certainty A common pitfall in roadmapping happens when your roadmap instils a false sense of certainty rather than acting as a guide to strategic direction. Emil Kabisch’s Escaping the roadmap trap explains this well. Traditional roadmaps are often confused with project plans. They prioritise deliverables over discovery, create pressure to maintain arbitrary timelines, and encourage teams to treat strategic planning as a one-off activity. As a result, traditional roadmaps can be quite scary. Teams can feel set up to fail. They’re also prone to being misread. A linear roadmap suggests the path is fixed — that all the research and learning is already done. Reality is never that neat. Product discovery is continuous. Priorities shift. Plans change. And these are good things — they keep your project on course.
An opportunity roadmap allows for this. ## Prioritise and communicate Per Kabisch, our approach is a blend of opportunity tree and Kanban roadmap. Like the opportunity tree, it encourages teams to regularly pause, reflect and choose the best next move rather than blindly following a fixed plan. Like Kanban, it separates the strategic priority we’re actively working (Now) from those we’re reasonably confident about (Next) and those we have tentatively in mind (Later). That separation matters. It avoids the common trap of compiling an organisational wish list and calling it a strategy. If something’s in the Later column, it needs a plausible chance of being picked up in future. It’s not a holding pen for all the things. Opportunity roadmaps don’t guarantee perfect prioritisation. But they make the conversation clearer and more honest for teams and stakeholders alike. They work particularly well as part of our engagement check-ins. Kanban is optimised for delivery, but it can make it trickier for stakeholders to keep tabs on where things are heading. A shared roadmap closes this gap. It gives everyone a touchpoint to discuss priorities, risks and potential shifts in direction. ## In summary, then The opportunity roadmap doesn’t try to do everything. But it does give you a clear, adaptable way to ask the right questions — and a platform to work through them to identify the best next move. They’re an example of the lightweight, flexible systems and processes that, in our experience, make it easier to get things done, done well, and almost entirely albatross-free.
dev.to
June 12, 2025 at 2:30 PM
The necessity of “cdk.context.json” in AWS CDK
## Context and cdk.context.json ### Context Context in AWS CDK refers to key-value pairs that can be associated with apps, stacks, or constructs. AWS CDK Context Documentation In simple terms, context is used in **cases where you need to provide information to CDK stacks from outside the stack definition**. For example, if you want to pass deployment environment information (dev, stg, prd, etc.) to a CDK stack from outside as a string, you can pass context with a key like `ENV`, and then receive that information within the CDK definition. This context information can be described in the `context` key of the cdk.json file, or passed as `--context` (`-c`) options to `cdk deploy` or `cdk synth` commands. npx cdk deploy -c ENV=dev Within the CDK stack (or app or construct), you can retrieve it as follows: const env = app.node.tryGetContext('ENV') as string; // dev, stg, prd, etc. Additionally, AWS CDK itself utilizes **feature flags** (a mechanism to explicitly opt-in to functional changes that involve breaking changes by setting flags to true), and the cdk.json file is also used as the storage location for these feature flags. ### cdk.context.json The `cdk.context.json` file is a **storage file for caching values retrieved from AWS accounts during synthesis**. For example, it dynamically retrieves and stores availability zone information or Amazon Machine Image (AMI) IDs currently available for EC2 instances from AWS accounts. Specifically, when you execute methods called **context methods (also called Lookup methods)** provided by CDK's L2 Construct or Stack classes, **AWS SDK is used internally to retrieve information from AWS accounts** , and the results are stored in the cdk.context.json file. AWS CDK Context Methods Documentation As shown below, these methods are used quite frequently for VPCs, SSM Parameter Store, and other common use cases. Information retrieved with these methods is **automatically written** to the cdk.context.json file. const vpc = Vpc.fromLookup(this, 'Vpc', { vpcId, }); const parameter = StringParameter.valueFromLookup(this, parameterName); During deployment or synthesis, **if that information exists** in the cache (cdk.context.json file), **the process of retrieving information from the AWS account via SDK does not run, and the information in the file is used instead**. ## The Necessity of cdk.context.json Now, let's discuss the main topic: "the necessity of cdk.context.json". To be more precise, we're discussing **"whether it's necessary to commit the cdk.context.json file to source code repositories like Git (not ignore it)"**. The conclusion is that it's **"necessary"** , or more accurately, **"it's better to commit it (in most cases)"**. The official documentation also states it's "necessary": > Because they're part of your application's state, cdk.json and cdk.context.json must be committed to source control along with the rest of your app's source code. Otherwise, deployments in other environments (for example, a CI pipeline) might produce inconsistent results. ## Why cdk.context.json is Necessary Why is it **necessary to commit the cdk.context.json file to source code repositories like Git**? There are two main reasons: * To avoid non-deterministic behavior (deployments) * To improve deployment speed ### Avoiding Non-deterministic Behavior (Deployments) What does avoiding non-deterministic behavior (deployments) mean? Let's discuss **the case without a caching mechanism (when there's no cdk.context.json file)**. 
For example, suppose you're deploying with context methods configured to retrieve the latest EC2 AMI. If a new AMI version is released after a certain date, and your CDK is implemented to retrieve the latest image, the AMI value retrieved would eventually differ from the currently deployed EC2 instance, causing EC2 replacement (reconstruction). To avoid such "non-deterministic" behavior where configuration changes based on deployment execution timing, the cdk.context.json file caches the AMI information from when it was deployed. **In subsequent deployments, this cached information is referenced to use the same value in every deployment** , ensuring **"deterministic" behavior**. The official documentation's best practices page also includes a section on "Commit cdk.context.json to avoid non-deterministic behavior", so please check it out. AWS CDK Best Practices Documentation By the way, if you want to prevent cases where you need to look up from AWS accounts when there's no cache information in cdk.context.json, that is, **if you want deploy and synth to error when there's no cache** , there's a `--lookups` option for `cdk deploy` and `cdk synth` commands. Setting this to `false` will cause deployment to error when there's no cache. (The default is `true`, so when there's no cache, it retrieves via SDK) This ensures **completely "deterministic" behavior**. --lookups Perform context lookups (synthesis fails if this is disabled and context lookups need to be performed) [boolean] [default: true] ### Improving Deployment Speed The previous point about "avoiding non-deterministic behavior (deployments)" is commonly discussed when explaining cdk.context.json, but many people might not know the detailed story about this aspect. Why does having (committing) the cdk.context.json file improve deployment speed? While it's true that caching reduces time by eliminating SDK calls and communication processes, there's an even bigger reason. That is, when cdk.context.json doesn't exist or doesn't contain the relevant information, **"synthesis runs 2 times"**. **"Synthesis running twice"** means not only that the synth process itself is heavy, but also that **build processes for Lambda code run again**. Let's look at the actual source code from the CDK repository. Below is the `doSynthesize` method of the `CloudExecutable` class, which is called during synthesis. CDK Source Code - CloudExecutable while (true) { const assembly = await this.props.synthesizer(this.props.sdkProvider, this.props.configuration); if (assembly.manifest.missing && assembly.manifest.missing.length > 0) { const missingKeys = missingContextKeys(assembly.manifest.missing); // ... // ... if (tryLookup) { await this.props.ioHelper.defaults.debug('Some context information is missing. Fetching...'); const updates = await contextproviders.provideContextValues( assembly.manifest.missing, this.props.sdkProvider, GLOBAL_PLUGIN_HOST, this.props.ioHelper, ); for (const [key, value] of Object.entries(updates)) { this.props.configuration.context.set(key, value); } // Cache the new context to disk await this.props.configuration.saveContext(); // Execute again continue; } } First, there's a while loop, and within it, the synthesis process runs first. while (true) { const assembly = await this.props.synthesizer(this.props.sdkProvider, this.props.configuration); Then, if context information is missing, meaning there's no necessary cache in cdk.context.json, it enters the following if statement. 
if (assembly.manifest.missing && assembly.manifest.missing.length > 0) { Here's the important part: the process to retrieve context information from AWS accounts via SDK runs, saves it as context to the file, and then returns to the beginning of the while loop with `continue`. const updates = await contextproviders.provideContextValues( assembly.manifest.missing, this.props.sdkProvider, GLOBAL_PLUGIN_HOST, this.props.ioHelper, ); for (const [key, value] of Object.entries(updates)) { this.props.configuration.context.set(key, value); } // Cache the new context to disk await this.props.configuration.saveContext(); // Execute again continue; Since the synthesis process is written at the beginning of the while loop, **the synthesis process runs again** , resulting in this behavior. This way, when cdk.context.json doesn't exist or doesn't contain the relevant information, **"synthesis runs twice"** , which causes deployments to take longer. ## Considerations for cdk.context.json While we've discussed that cdk.context.json is necessary, let's talk about some considerations. For example, suppose you're using the `StringParameter.valueFromLookup` method to dynamically reference values from SSM Parameter Store. At some point, **you update that parameter store value to make it new** , and in the next CDK deployment, **you want the CDK stack to reference that new value**. However, when there's a cache in cdk.context.json, the process to access Parameter Store doesn't run, so it **continues to reference the same old value as before**. In such cases, command options to clear context information (cache) are provided in the `cdk context` command. * Reset specific context npx cdk context --reset [KEY_OR_NUMBER] ## ex) npx cdk context --reset 2 * Clear all context npx cdk context --clear For the `[KEY_OR_NUMBER]` part of the `--reset` option, you specify the key name or number of the context you want to delete. You can check the key name or number with `cdk context` (without options). $ npx cdk context Context found in cdk.json: ┌───┬─────────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────┐ │ # │ Key │ Value │ ├───┼─────────────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────┤ │ 1 │ availability-zones:account=123456789012:region=eu-central-1 │ [ "eu-central-1a", "eu-central-1b", "eu-central-1c" ] │ ├───┼─────────────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────┤ │ 2 │ availability-zones:account=123456789012:region=eu-west-1 │ [ "eu-west-1a", "eu-west-1b", "eu-west-1c" ] │ └───┴─────────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────┘ Therefore, when you want to get the latest values like SSM parameters, it's good to clear the context with reset/clear before deployment. However, in that case, even if you've committed the cdk.context.json file, **the synth process will run twice** , so be aware that deployment speed will decrease. While not committing cdk.context.json is an option, when the cdk.context.json file exists and contains context information that wasn't cleared, that information is used as cache, so **communication processes to retrieve that information via SDK don't occur**. Therefore, I still think it's better to commit cdk.context.json. ## Cases Where Committing is Not Necessary **When you want to completely clear context (cache) with every deployment**. 
This applies to cases like "you're not loading VPCs or other resources within the stack, but you're only using context methods for SSM Parameter Store where you want to retrieve new values every time". However, be careful not to forget that you've ignored the cdk.context.json file when context methods become necessary in future development, which could lead to unknowingly slower deployment speeds. ## Conclusion Through writing this article, I realized that cdk.context.json is an unexpectedly key point in CDK. Please make sure to commit it rather than ignoring it.
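To tie the mechanics above together, here is a minimal TypeScript sketch (the stack name, the tag filter, and the /myapp/ami-id parameter are made up for illustration) showing both kinds of context the article discusses: a plain value passed in with `-c`, and lookup results that get cached in cdk.context.json:

```typescript
// Minimal sketch, assuming a standard aws-cdk-lib v2 project layout.
import { App, CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
import { Vpc } from 'aws-cdk-lib/aws-ec2';
import { StringParameter } from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

class MyStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Looked up via the AWS SDK on the first synth, then served from
    // cdk.context.json on later runs (this is what keeps deploys deterministic).
    const vpc = Vpc.fromLookup(this, 'Vpc', { tags: { Environment: 'shared' } });

    // Also cached; clear it with `cdk context --reset` when the parameter changes.
    const imageId = StringParameter.valueFromLookup(this, '/myapp/ami-id');

    new CfnOutput(this, 'VpcId', { value: vpc.vpcId });
    new CfnOutput(this, 'AmiId', { value: imageId });
  }
}

const app = new App();

// Plain context passed from outside: `npx cdk deploy -c ENV=dev`.
const env = app.node.tryGetContext('ENV') as string | undefined;
if (!env) {
  throw new Error('Missing context value ENV (pass it with -c ENV=dev|stg|prd)');
}

new MyStack(app, `MyStack-${env}`, {
  // fromLookup needs a concrete account/region; commonly taken from the CLI env.
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
```

Running `npx cdk synth -c ENV=dev` twice should make the behavior visible: the first run performs the lookups, writes them to cdk.context.json, and synthesizes again, while the second run synthesizes once straight from the cache.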
dev.to
June 12, 2025 at 2:37 PM
An essential PDF SDK feature checklist for your next project
Choosing the right PDF SDK can feel overwhelming. You want just enough features to get the job done—without extra complexity or high costs. Here’s a simple checklist of essential features to look for in a PDF SDK, along with key questions to ask vendors during your evaluation process. > **TL;DR:** If you’re building something with JavaScript or React and need to work with PDFs, don’t just grab any tool that says it can “handle” or “fill” PDF forms. This article shows you the _must-have_ features a good PDF SDK should include—like validation, data control, UI customization, native embedding, load, PDF-to-JSON, and various other features. It also explains why some tools make things way more complicated than they need to be, and how the right PDF SDK can save you tons of time. Every selection is followed with practical questions to ask vendors during your discovery phase. The big idea: pick a tool that feels like it _belongs_ in your app, not a workaround. I want to preface this article by stating that it outlines the minimum features and capabilities needed for PDF form filling use cases. However, many of these features may also be relevant for other PDF-related tasks, such as PDF viewing, annotating, and more. Let’s dive in to these features that are organized below by category. ### Core PDF form-filling features Your PDF SDK must handle basic form tasks smoothly. The most important things you’ll need are: * **Validation:** Good PDF SDKs should support real-time form validation to ensure users enter the correct data before submission. This includes enforcing formats (e.g. phone numbers, dates), required fields, and conditional logic directly in the form layer. * **Data Control and Synchronization:** It should allow seamless data flow between the PDF form and your application’s backend or database. This means supporting two-way data binding—when a user enters data into a PDF form, it should instantly update your app’s data model, and vice versa. Ideally, the SDK should also support structured data input/output (e.g. JSON) to programmatically populate fields and extract responses, ensuring consistency and reducing manual overhead in keeping data in sync. **Question to ask your vendor:** "Can I control the data to and from various form fields using JSON?" ### Frontend customization and UI A good PDF SDK doesn't force you into rigid design patterns. Instead, it provides flexibility: * **Customizable UI Components** : Look for an SDK that allows you to style form elements to match your application perfectly. You should be able to easily change fonts, colors, and layout using simple CSS. * **Customizable and Flexible UI** : Your PDF SDK should have minimal, clean default designs that don’t interfere with your existing app style. You should also have full control over the appearance and feel of the UI through theming, making sure it looks and feels like a natural part of your app. * **Native Embedding Support** : Choose a PDF SDK that allows native embedding directly into your application, rather than relying on iframes. Native rendering ensures better performance, responsiveness, and compatibility across web and mobile platforms. **Question to ask your vendor:** "How much control do we have over customizing form elements to match our existing app design, and does your SDK support true native embedding without iframes?" ### Performance and integration simplicity How your PDF SDK performs and integrates with your app matters. 
Make sure you get: * **Lightweight and Fast** : Choose an SDK that's small in size and loads quickly. A lightweight PDF SDK won't slow your app down, helping to keep your users happy with fast performance. For example, look for techniques like lazy-loading PDF assets, minimizing JavaScript bundle size through tree-shaking, and offloading compute-heavy operations like rendering or parsing. Efficient memory management and asynchronous processing also help handle large files without blocking the main thread or degrading UI responsiveness. **Question to ask your vendor:** "What's the typical size of your SDK for my use case?" ### Export and submission handling At some point, your users will finish filling out forms. Your PDF SDK needs to handle data export smoothly: * **PDF-to-JSON Export** : Your SDK should allow easy extraction of data entered in forms into JSON format. This makes it simple to use or save data in your application's database or backend. * **Reliable PDF Generation** : You should be able to generate high-quality, standardized PDF documents (PDF/A compliant), ensuring compatibility with many PDF viewers and readers. **Question to ask your vendor:** "Can your SDK reliably generate standardized PDFs and export form data as JSON?" ### Developer-friendly experience Finally, a great PDF SDK respects developers’ needs: * **Transparent Pricing** : Your PDF SDK provider should clearly state pricing with no hidden fees. You shouldn’t have to guess or worry about surprise costs. Predictable pricing helps manage your budget effectively. * **Helpful Resources, Support, and SLAs** : Choose an SDK with plenty of examples, clear documentation, and responsive support. Quick help from real people can make integration faster and easier. **Question to ask your vendor:** "Do you offer clear, predictable pricing and responsive developer support with practical examples?" ### Why these features matter A narrowly scoped PDF SDK focused on these core features allows for faster implementation and easier debugging. This lets developers maintain better control over the user experience and product maintenance. This checklist offers a technical baseline for evaluating PDF SDKs that support form-filling. Focus on essential features like field mapping, UI flexibility, performance, and clear export mechanisms. Confirm that documentation is thorough and integration steps are clear. By concentrating on tools that align with these core criteria, teams can improve reliability, reduce bloat, and accelerate release cycles without sacrificing PDF functionality. Happy building! **Need to build PDF form capabilities inside your SaaS application?** Joyfill makes it easy for developers to natively build and embed form and PDF experiences inside their own SaaS applications.
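For concreteness, the sketch below shows the kind of API surface this checklist points toward. The `PdfFormSdk` interface and its method names are entirely hypothetical — no specific vendor's SDK is being described — but they map one-to-one onto the validation, JSON data control, and PDF-to-JSON export items above:

```typescript
// Hypothetical API shape only — not a real SDK. When evaluating vendors,
// look for an interface roughly like this for JSON-driven form filling.
interface PdfFormSdk {
  load(source: ArrayBuffer | string): Promise<void>;                 // native embed, no iframe
  setFieldData(data: Record<string, unknown>): Promise<void>;        // JSON in
  exportToJson(): Promise<Record<string, unknown>>;                  // JSON out (PDF-to-JSON)
  onFieldChange(cb: (field: string, value: unknown) => void): void;  // two-way binding
  validate(): Promise<{ field: string; message: string }[]>;         // real-time validation
  exportPdf(options?: { pdfA?: boolean }): Promise<Blob>;            // standards-compliant output
}

// Example of how such an SDK would slot into an app's data flow.
async function fillAndSubmit(sdk: PdfFormSdk, pdfUrl: string) {
  await sdk.load(pdfUrl);
  await sdk.setFieldData({ firstName: 'Ada', agreeToTerms: true });

  const errors = await sdk.validate();
  if (errors.length > 0) {
    console.warn('Fix these fields before submitting:', errors);
    return;
  }

  const answers = await sdk.exportToJson(); // persist to your backend
  const finalPdf = await sdk.exportPdf({ pdfA: true });
  console.log('Collected data', answers, 'and generated', finalPdf.size, 'bytes of PDF');
}
```

If a vendor's SDK can't be used in roughly this way without workarounds, that is usually a sign the integration will fight your app rather than fit into it.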
dev.to
June 12, 2025 at 2:31 PM
🛠️ Setting Up Python, Anaconda, and Jupyter Notebook (Beginner’s Guide)
Hey Dev.to fam! 👋 I recently set up my Python environment using Anaconda — along with Jupyter Notebook — and wanted to share a simple step-by-step guide for anyone just starting out in data analysis or Python. ✅ **Why Anaconda?** * No hassle managing Python versions. * Comes bundled with Python, Jupyter Notebook, and most popular data libraries (like Pandas, NumPy, Matplotlib). * Beginner-friendly and perfect for data-related work. 🚀 **Step-by-step Setup:** **1. Download and Install Anaconda** * Go to https://www.anaconda.com/products/distribution. * Choose Windows 64-bit Graphical Installer. * Download (~600MB) and run the installer. * During installation: * Keep default options checked (highly recommended). * No need to install Python separately — Anaconda includes the latest version automatically. **2. Open Anaconda Navigator** * After installing, launch Anaconda Navigator. You'll see tools like Jupyter Notebook, Spyder, and VS Code. * From here, click "Launch Jupyter Notebook". **3. Launch Jupyter Notebook** * Jupyter opens in your default browser (like Chrome). * I navigated to my Desktop folder — so any notebooks I create or save appear directly on my Desktop. * To create a new notebook: 1. In the top-right corner, click "New". 2. Select "Python 3 (ipykernel)" from the dropdown. 3. A new notebook will open — ready for you to start coding! **4. Verify Everything Works** * Created a new notebook. * Ran basic Python code to check (print("Hello World")). * Verified key libraries like Pandas and NumPy were pre-installed. **Why this setup is great?** * No Python version confusion. * Jupyter Notebook ready out of the box. * Widely accepted for data science & machine learning tasks. * Easy for beginners — no terminal headaches. **Ready to Dive into Python and Data!** That’s my quick setup journey. Hope this helps someone getting started! Let me know in the comments if you faced any tricky parts or need tips for your own setup.
dev.to
June 12, 2025 at 2:30 PM
The Mind Behind the Code: How Psychology Shapes Us as Web Developers
When we talk about web development, we often focus on frameworks, design patterns, responsive layouts, or clean APIs. But beneath all the code, all the styling, all the sprints—there’s something far more powerful at play: **Your mind.** Whether you're debugging late at night, collaborating on a team project, or figuring out how a user will interact with a product, **psychology plays a central role in everything you do as a web developer.** ### 1. **Cognitive Load & Clean Code** When you write messy, inconsistent, or overly complex code, you’re not just making things harder for others—you’re increasing cognitive load. Our brains crave patterns, simplicity, and predictability. Writing readable code isn't just a good practice—it's an act of empathy. You're designing for the developer mind, including your future self. ### 2. **UX is Rooted in Human Behavior** Designing intuitive user interfaces is less about aesthetics and more about **understanding human habits**. Psychology teaches us how users think, what they expect, and how they make decisions. Want better conversions? Understand human attention, decision fatigue, and emotional triggers. The best web developers don’t just ask, “What looks good?” They ask, “What _feels_ right to the user?” ### 3. **Imposter Syndrome & Developer Identity** Most devs have dealt with imposter syndrome—feeling like you’re not good enough, even when you’re doing just fine. That’s not a tech problem. That’s a psychology problem. The pressure to keep up with trends, perform at a high level, and compare yourself to highlight reels is draining. Knowing how your mind processes insecurity can help you fight back with facts, not feelings. ### 4. **Motivation, Burnout, and Focus** Every developer has hit that wall—the one where you just can’t push anymore. Psychology helps you understand **what fuels your motivation** , what leads to burnout, and how to build habits that support long-term success. You’re not just writing code. You’re managing emotions, battling distractions, and trying to stay creative under pressure. That takes mental resilience. ### Let’s Talk * How has understanding your own mindset helped you grow as a developer? * Have you noticed how your mood or mental state impacts your code? * What habits have you developed to take care of your mental health in tech? Let’s normalize talking about what’s going on behind the keyboard—not just in the editor. Because if we understand how we think, we can become better devs, better teammates, and better humans.
dev.to
June 12, 2025 at 2:30 PM