[{"content":"","date":null,"permalink":"/tags/books/","section":"Tags","summary":"","title":"Books"},{"content":"","date":null,"permalink":"/tags/guides/","section":"Tags","summary":"","title":"Guides"},{"content":"Welcome to the February 2025 Link Collection! This month’s edition is packed with exciting open-source projects, essential reads on strategy and leadership, and practical guides on tech and productivity. Whether you’re an engineer, manager, or lifelong learner, there’s something here for you. Let’s dive in!\nProjects #dgtlmoon/changedetection.io #The best and simplest free open source web page change detection, website watcher, restock monitor and notification service. Restock Monitor, change detection. Designed for simplicity.\nRayhunter #Rayhunter is a new open source tool we’ve created that runs off an affordable mobile hotspot that we hope empowers everyone, regardless of technical skill, to help search out cell-site simulators.\nsiyuan-note/siyuan #A privacy-first, self-hosted, fully open source personal knowledge management software, written in typescript and golang.\nglinford/dns-easy-switcher #DNS Easy Switcher for MacOS.\nTeaching a $14 ESP32 to Detect and Auto-Mute TV Ads #A weekend project that uses computer vision to detect and automatically mute TV ads through Sonos speakers. Built with an ESP32, Home Assistant, and some 3D printing.\nBooks #Antifragile: Things That Gain from Disorder #Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. What Taleb has identified and calls “antifragile” is that category of things that not only gain from chaos but need it in order to survive and flourish.\nISBN: 978-0812979688\nBuy on Amazon Good Strategy/Bad Strategy #When Richard Rumelt\u0026rsquo;s Good Strategy/Bad Strategy was published in 2011, it immediately struck a chord, calling out as bad strategy the mish-mash of pop culture, motivational slogans and business buzz speak so often and misleadingly masquerading as the real thing. Since then, his original and pragmatic ideas have won fans around the world and continue to help readers to recognise and avoid the elements of bad strategy and adopt good, action-oriented strategies that honestly acknowledge the challenges being faced and offer straightforward approaches to overcoming them. Strategy should not be equated with ambition, leadership, vision or planning; rather, it is coherent action backed by an argument. For Rumelt, the heart of good strategy is insight into the hidden power in any situation, and into an appropriate response - whether launching a new product, fighting a war or putting a man on the moon. Drawing on examples of the good and the bad from across all sectors and all ages, he shows how this insight can be cultivated with a wide variety of tools that lead to better thinking and better strategy, strategy that cuts through the hype and gets results.\nISBN: 978-1781256176\nBuy on Amazon How to Be Perfect #From the writer and executive producer of the award-winning Netflix series The Good Place that made moral philosophy fun: a foolproof guide to making the correct moral decision in every situation you ever encounter, anywhere on earth, forever *How can we live a more ethical life? 
This question has plagued people for thousands of years, but it\u0026rsquo;s never been tougher to answer than it is now, thanks to challenges great and small that flood our day-to-day lives and threaten to overwhelm us with impossible decisions and complicated results with unintended consequences. Plus, being anything close to an \u0026rsquo;ethical person\u0026rsquo; requires daily thought and introspection and hard work; we have to think about how we can be good not, you know, once a month, but literally all the time. To make it a little less overwhelming, this fascinating, accessible and funny book by one of our generation\u0026rsquo;s best writers and adept minds in television comedy, Michael Schur, boils down the whole confusing morass with real life dilemmas (from \u0026lsquo;should I punch my friend in the face for no reason?\u0026rsquo; to \u0026lsquo;can I still enjoy great art if it was created by terrible people?\u0026rsquo;), so that we know how to deal with ethical dilemmas.\nISBN: 978-1529421330\nBuy on Amazon Get to the Point! #In this indispensable guide for anyone who must communicate in speech or writing, Schwartzberg shows that most of us fail to convince because we don\u0026rsquo;t have a point-a concrete contention that we can argue, defend, illustrate, and prove. He lays out, step-by-step, how to develop one. In Joel\u0026rsquo;s Schwartzberg\u0026rsquo;s ten-plus years as a strategic communications trainer, the biggest obstacle he\u0026rsquo;s come across-one that connects directly to nervousness, stammering, rambling, and epic fail-is that most speakers and writers don\u0026rsquo;t have a point. They typically have just a title, a theme, a topic, an idea, an assertion, a catchphrase, or even something much less. A point is something more. It\u0026rsquo;s a contention you can propose, argue, defend, illustrate, and prove. A point offers a position of potential value. Global warming is real is not a point. Scientific evidence shows that global warming is a real, human-generated problem that will have a devastating environmental and financial impact is a point. When we have a point, our influence snaps into place. We communicate belief, conviction, and urgency. This book shows you how to identify your point, leverage it, stick to it, and sell it and how to train others to identify and successfully make their own points.\nISBN: 978-1523094110\nBuy on Amazon Life Changing Magic Of Tidying #Despite constant efforts to declutter your home, do papers still accumulate like snowdrifts and clothes pile up like a tangled mess of noodles? Japanese cleaning consultant Marie Kondo takes tidying to a whole new level, promising that if you properly simplify and organize your home once, you’ll never have to do it again. Most methods advocate a room-by-room or little-by-little approach, which doom you to pick away at your piles of stuff forever. The KonMari Method, with its revolutionary category-by-category system, leads to lasting results. In fact, none of Kondo’s clients have lapsed (and she still has a three-month waiting list).\nISBN: 978-1607747307\nBuy on Amazon General #Re-Volt I/O #The popular racing game from \u0026lsquo;99! Download the game, tracks, cars and play online. Join the community to create and share with others!\nHours ∝ Story Points #Are those Scrum poker cards I see?\nTech Hiring Bubble Bursts #The tech job market is shifting - AI, automation, and competition are redefining careers. 
Learn how engineers can stay relevant in the evolving industry.\nYour brain is full of microplastics: are they harming you? #Plastics have infiltrated every recess of the planet, including your lungs, kidneys and other sensitive organs. Scientists are scrambling to understand their effects on health.\nThe Anti-Ownership Ebook Economy #Pulling back the curtain on the evolution of ebooks offers some clarity to how the shift to digital left ownership behind in the analog world.\nManagement #Improving Team Morale is not an Objective #New managers often see making their team happy as their main objective. Here\u0026rsquo;s the problem with this approach.\nHow to develop capability in your team #Developing capabilities starts with being clear in the distinction between capable and capability - and building the right training from there.\nIn Praise of \u0026ldquo;Normal\u0026rdquo; Engineers #Most of us have encountered a few engineers who seem practically magician-like, a class apart from the rest of us in their ability to reason about complex mental models, leap to non-obvious yet elegant solutions, or emit waves of high quality code at unreal velocity.\nGen Z really are the hardest to work with—even managers of their own generation say they’re difficult #Instead bosses plan to hire more of their millennial counterparts\nA Field Guide to Team Dynamics and Conflict #A practical guide for leaders to recognize and respond to organizational patterns, from harmonious flow to productive conflict, with tools for designing small, meaningful experiments.\nGuides #Introduction - Mintlify Guides #Welcome to our compilation of best practices for writing technical documentation.\nThe Definitive T430 Modding Guide #I’ve been getting requests to create a modification guide for the T430 for over a year now, so this guide is long overdue.\nFiltering spam with GPT4o-mini for $0.00008 per email – Diary of a SysAdmin #I self-host my mail, but I get flooded with spam. I run 4 mail exchangers, all with Postfix + RSpamD. Here’s a look at recently blocked junk on one of my inbound relays:\nThe Scrum Survival Guide: How to Make Scrum Work (Maybe) #Every time I write an article about Scrum, there\u0026rsquo;s always a sizable group telling me I’m doing it wrong.\nUsing the Internet Without Leaving a Trace: a How-To Guide #There may be a time, especially in the upcoming presidency, when you find yourself wanting to use online resources but not leaving any sort of digital “paper trail” behind.\n","date":"28 February 2025","permalink":"/links/link-collection-february-2025/","section":"Links","summary":"The February 2025 Link Collection features a curated roundup of valuable links for tech professionals, engineers, and managers. Explore powerful open-source tools, expert insights on leadership and strategy, and practical guides on productivity, software development, and management. 
Whether you\u0026rsquo;re looking to enhance your skills or stay ahead of industry trends, this edition has you covered.","title":"Link Collection: February 2025"},{"content":"","date":null,"permalink":"/links/","section":"Links","summary":"","title":"Links"},{"content":"","date":null,"permalink":"/tags/links/","section":"Tags","summary":"","title":"Links"},{"content":"","date":null,"permalink":"/tags/management/","section":"Tags","summary":"","title":"Management"},{"content":"","date":null,"permalink":"/tags/projects/","section":"Tags","summary":"","title":"Projects"},{"content":"","date":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"/tags/tech-roundup/","section":"Tags","summary":"","title":"Tech Roundup"},{"content":"","date":null,"permalink":"/","section":"VirtuallyTD","summary":"","title":"VirtuallyTD"},{"content":"Welcome to the January 2025 edition of the Link Collection! This month’s collection features an exciting mix of open-source tools, thought-provoking books, and technical guides. From leadership strategies to hands-on tutorials, there’s something here for everyone.\nProjects #icloud-photos-downloader/icloud_photos_downloader\nA command-line tool to download photos from iCloud.\nvrtmrz/obsidian-livesync\nContribute to vrtmrz/obsidian-livesync development by creating an account on GitHub.\nhybridgroup/go-haystack\nTrack personal Bluetooth devices via Apple's \u0026quot;Find My\u0026quot; network using OpenHaystack and Macless-Haystack with tools written in Go/TinyGo. No Apple hardware required! - hybridgroup/go-ha\u0026hellip;\nkujov/git-profile-manager\nManage your git profiles with ease. Contribute to kujov/git-profile-manager development by creating an account on GitHub.\nzed-industries/zed\nZed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.\nBooks #The Daily Stoic\nStoic philosophy has long been the secret weapon of history’s greatest and wisest leaders\u0026ndash;from emperors to artists, activists to fighter pilots. Today, people of all stripes are seeking out Stoicism’s unique blend of practicality and wisdom as they look for answers to the great questions of daily life. Where should they start? Epictetus? Marcus Aurelius? Seneca? Which edition? Which translator? Presented in a page-per-day format, this daily resource combines all new translations done by Stephen Hanselman of the greatest passages from the great Stoics (including several lesser known philosophers like Zeno, Cleanthes and Musonius Rufus) with helpful commentary.\nISBN: 978-0735211735\nBuy on Amazon The Manager\u0026rsquo;s Path\nManaging people is difficult wherever you work. But in the tech industry, where management is also a technical discipline, the learning curve can be brutal—especially when there are few tools, texts, and frameworks to help you. In this practical guide, author Camille Fournier (tech lead turned CTO) takes you through each stage in the journey from engineer to technical manager. From mentoring interns to working with senior staff, you’ll get actionable advice for approaching various obstacles in your path. This book is ideal whether you’re a new manager, a mentor, or a more experienced leader looking for fresh advice. Pick up this book and learn how to become a better manager and leader in your organization. 
Begin by exploring what you expect from a manager Understand what it takes to be a good mentor, and a good tech lead Learn how to manage individual members while remaining focused on the entire team Understand how to manage yourself and avoid common pitfalls that challenge many leaders Manage multiple teams and learn how to manage managers Learn how to build and bootstrap a unifying culture in teams\nISBN: 978-1491973899\nBuy on Amazon Accelerate\nWinner of the Shingo Publication Award Accelerate your organization to win in the marketplace. How can we apply technology to drive business value? For years, we\u0026rsquo;ve been told that the performance of software delivery teams doesn\u0026rsquo;t matter―that it can\u0026rsquo;t provide a competitive advantage to our companies. Through four years of groundbreaking research to include data collected from the State of DevOps reports conducted with Puppet, Dr. Nicole Forsgren, Jez Humble, and Gene Kim set out to find a way to measure software delivery performance―and what drives it―using rigorous statistical methods. This book presents both the findings and the science behind that research, making the information accessible for readers to apply in their own organizations. Readers will discover how to measure the performance of their teams, and what capabilities they should invest in to drive higher performance. This book is ideal for management at every level.\nISBN: 9781942788355\nBuy on Amazon Engineering in Plain Sight\nEngineering in Plain Sight is a beautifully illustrated field guide with accessible explanations to nearly every part of the constructed world around us. Author Grady Hillhouse is the creator behind the popular YouTube channel Practical Engineering (over 3 million subscribers!) and this book is essentially 50 new episodes crammed between two covers. Engineering in Plain Sight extends the field guide genre from natural phenomena to human-made structures, making them approachable and understandable to non-engineers. It transforms readers\u0026rsquo; perspectives of the built environment, converting the act of looking at infrastructure from a mundane inevitability into an everyday diversion and joy. Each section of this accessible, informative book features colorful illustrations revealing the fascinating details of how the human-made world works. An ideal road trip companion, this book offers a fresh perspective on the parts of the environment that often blend into the background. Readers will learn to identify characteristics of the electrical grid, roadways, railways, bridges, tunnels, waterways, and more. Engineering in Plain Sight inspires curiosity, interest, and engagement in how the infrastructure around us is designed and constructed.\nISBN: 978-1718502321\nBuy on Amazon Raising an Entrepreneur\nIn this book, a political powerhouse and mother of two thriving entrepreneurs interviews the moms of over fifty of today’s most successful innovators and—based on her findings—provides ten rules for raising confident, fearless, self-made individuals whose ideas and drive will change the world. Is your child passionate about something? Maybe it’s music, sports, theatre, writing, building things, or helping others—the kind of creative pursuits that create distinguished leaders and make change in the world. All parents want their kids to have success, but how do you help them cultivate their talent and vision for a personally fulfilling and financially successful life? 
Once you’ve recognized their drive and passion, how do you set your little trailblazers free? Raising an Entrepreneur presents seventy-six stories from the mothers of some of the most successful entrepreneurs today. Entrepreneurs are the new rock stars—they’re the ones who turn their passions into ingenious projects, because they’re willing to risk failure to make their dreams come true. Highlighting the various achievements of innovators from a wide range of cultural and socioeconomic backgrounds—such as Geek Squad’s Robert Stephens and Nantucket Nectars\u0026rsquo; Tom Scott, nonprofit founders like Mama Hope’s Nyla Rodgers and Pencils of Promise\u0026rsquo;s Adam Braun, profit for purpose creators like TOMS Shoes\u0026rsquo; Blake Mycoskie and FEED Projects’ Ellen Gustafson, activists like Mike de la Rocha and Erica Ford, and artists like actress Emmanuelle Chriqui and songwriter Benny Blanco—and with photos of the entrepreneurs as children, these inspirational interviews will provide guidance and support on nurturing your own change maker. Not every kid will be an entrepreneur, but all kids have something that makes them unique. If you’re seeking a way to nurture your children’s passions and help them harness their talent, drive, and grit into a fulfilling life purpose, this book is for you. With these ten rules and numerous inspiring stories, you’ll gain confidence in raising your child into a creatively successful adult.\nISBN: 978-1098377748\nBuy on Amazon General #🌱 My blog is a digital garden, not a blog\nThe phrase \u0026ldquo;digital garden\u0026rdquo; is a metaphor for thinking about writing and creating that focuses less on the resulting \u0026ldquo;showpiece\u0026rdquo; and more on the process, care, and craft it takes to get there.\nOnce It Has Been Trained, Who Will Own My Digital Twin? - The Scholarly Kitchen\nGenerative AI agents have the possibility to make us more productive, but once trained, who will own and control it?\nThe Free Software Foundation is dying\nThe Free Software Foundation is one of the longest-running missions in the free software movement, effectively defining it. It provides a legal foundation for the movement and organizes activism around software freedom.\nThe Missing Bit | I love email, so I rant about it\nI have a deep affection for email and regard it as one of the most crucial components of modern communication, and to some extent, society at large. But email is badly treated. Yes, it has flaws, but it accomplished something unique: an universal way to contact someone, for free, from anywhere on the planet.\nThe Invisible Way You Can Be Tracked Online\nWhat does it take to truly opt out of invasive online tracking, creepy or unwelcome targeted ads, and data collection that you never mean\u0026hellip;\nManagement #Design Your Organization for the Conflicts You Want to Hear About\nOrganization design seems a popular topic these days. Maybe it’s the downturn. Maybe it’s just planning season. But either way, many people are asking me questions about how to design their organizations for 2025 and beyond. Questions like: The argument … Continue reading →\nAmazon Has a Secret Weapon Known as \u0026ldquo;Working Backwards\u0026rdquo;\u0026ndash;and It Will Transform the Way You Work\nOver the past 25 years, Amazon has transformed itself. What began as an online bookseller has become one of the world’s largest retailers. 
Beyond that, Amazon is the market leader in cloud storage services (AWS), is a major producer of both television and film (Amazon Studios), and has now entered the health care market. Learn how the process works and how it can help you and your business.\nWriting an engineering strategy.\nOnce you become an engineering executive, an invisible timer starts ticking in the background.Tick tick tick. At some point that timer will go off, at which point someone will rush up to you demanding an engineering strategy.\nDeep work. Essentialism in asynchronous culture\nNowadays, we are getting accustomed to working in a continuously interrupting environment. Smartphone notifications, hundreds of e-mails, open spaces, and meetings slicing our workday. We are feeling busy, and overworked, but are we more productive?\nOKR Best Practices\nThis a concise guide on how to start writing your OKRs.\nGuides #Getting gpt-4o-mini to perform like gpt-4o\nWe’d like to share an LLM architectural pattern that we’ve found success with for dividing tasks between large and small language models. For many tasks, it allows us to use smaller foundation models, like gpt-4o-mini, while maintaining gpt-4o levels of capability.\nVim Basics | Chuck Carroll\nVim has been my text editor of choice for a couple of years. Vim (a contraction of vi improved) is a CLI text editor based on vi. It does have a learning curve as it\u0026rsquo;s keyboard driven rather than menus or icons.\nA Gentle Introduction to Using a Vector Database | Steve Kinney\nIn which we learn how to build a simple vector database using Pinecone and OpenAI embeddings, and discover it was way easier than we might have expected.\nUsing a Stream Deck to Control Things – Mike Burke\nHow I am using a Stream Deck, along with some AppleScripts in Keyboard Maestro, to improve my control of Things. Check out the video and find all of the resources in this post.\nBuilding Digital Mind: A Personal Knowledge System\nHow to Create a Living Framework for Capturing, Connecting, and Evolving Your Thoughts\n","date":"31 January 2025","permalink":"/links/link-collection-january-2025/","section":"Links","summary":"The January 2025 Link Collection offers a comprehensive roundup of valuable links for tech enthusiasts and professionals. Explore cutting-edge open-source tools, transformative books on leadership and engineering, and practical guides on productivity and management. This edition is packed with resources to enhance your skills and knowledge in the tech industry.","title":"Link Collection: January 2025"},{"content":"","date":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation"},{"content":"","date":null,"permalink":"/tags/bedtime-routine/","section":"Tags","summary":"","title":"Bedtime Routine"},{"content":"","date":null,"permalink":"/tags/hassio/","section":"Tags","summary":"","title":"HASSIO"},{"content":"","date":null,"permalink":"/tags/home-assistant/","section":"Tags","summary":"","title":"Home Assistant"},{"content":"","date":null,"permalink":"/tags/home-automation/","section":"Tags","summary":"","title":"Home Automation"},{"content":"","date":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"Creating a Smart Bedtime Light Automation for Kids Using Home Assistant #I often find my children turning their lights on during the night when they wake up. 
I started off by having an automation which checked every hour if the light was on then switch it off then I had the bright idea 💡 that instead of constantly checking and turning off lights, I could create an automation that gradually dims and turns off their bedside lamps automatically after they have been turned on.\nThe Solution #We\u0026rsquo;ll create a Home Assistant automation that:\nDetects when a bedside lamp is turned on during nighttime hours Automatically dims the light to 50% immediately Gradually dims the light over 10 minutes Finally turns the light off completely Prerequisites # Home Assistant installed and running Smart bulbs or lamps that support dimming The lights already configured in Home Assistant The Automation Code #Here\u0026rsquo;s the YAML code for the automation:\nalias: Dim Light Overnight description: \u0026#34;Automatic dimming of bedside lights during night time hours.\u0026#34; triggers: - entity_id: - light.childs_bedside_lamp from: \u0026#34;off\u0026#34; to: \u0026#34;on\u0026#34; trigger: state conditions: - condition: time after: \u0026#34;00:00:00\u0026#34; before: \u0026#34;06:00:00\u0026#34; actions: - data: brightness_pct: 50 action: light.turn_on target: entity_id: - light.childs_bedside_lamp - repeat: count: 10 sequence: - data: entity_id: - light.childs_bedside_lamp brightness_step_pct: -5 action: light.turn_on - delay: minutes: 1 - data: {} action: light.turn_off target: entity_id: - light.childs_bedside_lamp mode: single How It Works #Let\u0026rsquo;s break down each section of the automation:\nTrigger #triggers: - entity_id: - light.childs_bedside_lamp from: \u0026#34;off\u0026#34; to: \u0026#34;on\u0026#34; trigger: state This trigger activates when the specified light changes from off to on.\nConditions #conditions: - condition: time after: \u0026#34;00:00:00\u0026#34; before: \u0026#34;06:00:00\u0026#34; The automation only runs between midnight and 6 AM.\nActions #The automation performs three main actions:\nInitial Dim: - data: brightness_pct: 50 action: light.turn_on Immediately dims the light to 50% brightness when turned on.\nGradual Dimming: - repeat: count: 10 sequence: - data: brightness_step_pct: -5 action: light.turn_on - delay: minutes: 1 Reduces brightness by 5% every minute for 10 minutes.\nFinal Turn Off: - data: {} action: light.turn_off Completely turns off the light after the dimming sequence.\nInstallation # In Home Assistant, navigate to Configuration \u0026gt; Automations Click the + button to create a new automation Click the three dots in the upper right corner and select \u0026ldquo;Edit in YAML\u0026rdquo; Copy and paste the above code Replace light.childs_bedside_lamp with your light entity ID Save the automation Customization Options #You can modify this automation by:\nAdjusting the time window in the conditions Changing the initial brightness percentage Modifying the dimming duration and steps Adding multiple lights to the automation Conclusion #This automation provides a gentle way to help children return to sleep when they wake up at night. Instead of an abrupt light-off experience, the gradual dimming helps ease them back to sleep naturally.\nRemember to test the automation during the day first by temporarily modifying the time condition to make sure it works as expected.\n","date":"7 January 2025","permalink":"/posts/smart-bedtime-light-automation-with-home-assistant/","section":"Posts","summary":"​Discover a practical Home Assistant automation to dim and turn off kids\u0026rsquo; bedside lamps at night. 
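For easier reading, here is the same automation laid out with standard YAML indentation. The entity ID, time window, and dimming steps are exactly those used in the post; swap in your own light entity before using it.
alias: Dim Light Overnight
description: "Automatic dimming of bedside lights during night time hours."
triggers:
  - entity_id:
      - light.childs_bedside_lamp
    from: "off"
    to: "on"
    trigger: state
conditions:
  - condition: time
    after: "00:00:00"
    before: "06:00:00"
actions:
  - data:
      brightness_pct: 50
    action: light.turn_on
    target:
      entity_id:
        - light.childs_bedside_lamp
  - repeat:
      count: 10
      sequence:
        - data:
            entity_id:
              - light.childs_bedside_lamp
            brightness_step_pct: -5
          action: light.turn_on
        - delay:
            minutes: 1
  - data: {}
    action: light.turn_off
    target:
      entity_id:
        - light.childs_bedside_lamp
mode: single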
This step-by-step guide includes YAML code, customization tips, and a breakdown of each automation element.","title":"Smart Bedtime Light Automation for Kids with Home Assistant"},{"content":"","date":null,"permalink":"/tags/smart-home/","section":"Tags","summary":"","title":"Smart Home"},{"content":"","date":null,"permalink":"/tags/smart-lights/","section":"Tags","summary":"","title":"Smart Lights"},{"content":"","date":null,"permalink":"/tags/empathy/","section":"Tags","summary":"","title":"Empathy"},{"content":"","date":null,"permalink":"/tags/ethical-behavior/","section":"Tags","summary":"","title":"Ethical Behavior"},{"content":"","date":null,"permalink":"/tags/human-nature/","section":"Tags","summary":"","title":"Human Nature"},{"content":"","date":null,"permalink":"/tags/morality/","section":"Tags","summary":"","title":"Morality"},{"content":"","date":null,"permalink":"/tags/personal-growth/","section":"Tags","summary":"","title":"Personal Growth"},{"content":"","date":null,"permalink":"/tags/psychology/","section":"Tags","summary":"","title":"Psychology"},{"content":"","date":null,"permalink":"/tags/self-reflection/","section":"Tags","summary":"","title":"Self-Reflection"},{"content":"Have you ever wondered what truly separates a \u0026ldquo;good\u0026rdquo; person from a \u0026ldquo;bad\u0026rdquo; one? Our world often paints morality in black and white; however, the reality is far more complex. Could it be that the fundamental difference between good and bad people is not in their actions, but in their ability to self-reflect and feel remorse?\nThe Power of Conscience #We have all done things we are not proud of. It is an inevitable part of being human. More important is what happens after those moments; that is what defines our moral character. Good people possess the ability to look inward, recognize their mistakes, feel genuine remorse for their actions and deal with the consequences. This emotional response is not just about feeling bad – it\u0026rsquo;s about understanding the impact of our behavior on others and wanting to improve.\nThe Missing Mirror #In contrast, those we might consider \u0026ldquo;bad\u0026rdquo; people often lack the ability for self-reflection. They may commit harmful acts or behaviors without experiencing genuine guilt; they may rationalize their behavior instead of acknowledging its negative impact. It\u0026rsquo;s not that they\u0026rsquo;re inherently evil; rather, they are missing this essential emotional and psychological mechanism that helps guide moral behavior.\nThe Role of Empathy #This capacity for self-reflection is closely tied to empathy. When we can truly feel remorse, it is often because we can imagine and understand how our actions have affected others. Good people don\u0026rsquo;t just recognize their mistakes intellectually; they feel them emotionally through their connection to others.\nGrowth Through Reflection #What makes this idea particularly powerful is its implications for personal growth. The ability to feel remorse is not just about feeling bad; it\u0026rsquo;s a catalyst for change. When we can honestly acknowledge our mistakes and feel genuine remorse, we\u0026rsquo;re more likely to:\nLearn from our experiences Make conscious efforts to change our behavior Develop stronger empathy Make amends when possible Grow as individuals Beyond Simple Labels #This perspective challenges the notion that people are inherently good or bad. 
Instead, it suggests that moral character is more about our capacity for emotional and psychological growth. It is not about never making mistakes, but about how we respond to them.\nThe Hope in This Idea #What is particularly encouraging about this view is that it offers hope that change is possible. While some may currently lack the capacity for meaningful self-reflection, it can be developed through experience, therapy, and emotional growth. It\u0026rsquo;s not a fixed trait but a skill that can be cultivated.\nFinal Thoughts #Perhaps the next time we are quick to judge someone as \u0026ldquo;good\u0026rdquo; or \u0026ldquo;bad,\u0026rdquo; we should instead consider their capacity for self-reflection and remorse. Are they able to recognize their mistakes? Can they feel genuine remorse? Do they show a desire to grow and change? These questions might give us a more helpful way to understand human nature and morality.\n","date":"2 January 2025","permalink":"/posts/the-missing-mirror/","section":"Posts","summary":"Explore the true essence of morality in \u0026lsquo;The Missing Mirror,\u0026rsquo; where self-reflection and empathy distinguish the good from the bad. Uncover how remorse not only defines moral character but also serves as a catalyst for personal growth and change.","title":"The Missing Mirror"},{"content":"Welcome to the December 2024 edition of the Link Collection! This month’s collection features an exciting mix of open-source tools, thought-provoking books, and technical guides. From leadership strategies to hands-on tutorials, there’s something here for everyone.\nProjects #juanfont/headscale #An open source, self-hosted implementation of the Tailscale control server\nsilverbulletmd/silverbullet #The knowledge tinkerer\u0026rsquo;s notebook. Contribute to silverbulletmd/silverbullet development by creating an account on GitHub.\nsirupsen/zk #Zettelkasten on the command-line 📚 🔍. Contribute to sirupsen/zk development by creating an account on GitHub.\nfoambubble/foam #A personal knowledge management and sharing system for VSCode\nMonica #Monica lets you remember everything about your loved ones.\nBooks #Fake #Robert Kiyosaki has built a legacy around simplifying complex and often-confusing subjects like money and investing. He continues to challenge conventional wisdom and asks the questions that will help readers sift through today\u0026rsquo;s information overload to uncover ways to assess what\u0026rsquo;s real… and what isn\u0026rsquo;t. And use truth and facts as a foundation for taking control of their financial lives. In this new book―that will be released, by chapter, online to invite reader feedback and questions that will be included in the print and e-book editions―Robert fights what’s ‘fake’ and helps readers differentiate between what’s real…and what isn’t.\nISBN: 9781612680842\nBuy on Amazon The Missing README #Key concepts and best practices for new software engineers — stuff critical to your workplace success that you weren’t taught in school. For new software engineers, knowing how to program is only half the battle. You’ll quickly find that many of the skills and processes key to your success are not taught in any school or bootcamp. 
The Missing README fills in that gap—a distillation of workplace lessons, best practices, and engineering fundamentals that the authors have taught rookie developers at top companies for more than a decade.\nISBN: 9781718501836\nBuy on Amazon Building a Second Brain #“One of my favorite books of the year. It completely reshaped how I think about information and how and why I take notes.” —Daniel Pink, bestselling author of Drive A revolutionary approach to enhancing productivity, creating flow, and vastly increasing your ability to capture, remember, and benefit from the unprecedented amount of information all around us. For the first time in history, we have instantaneous access to the world’s knowledge. There has never been a better time to learn, to contribute, and to improve ourselves. Yet, rather than feeling empowered, we are often left feeling overwhelmed by this constant influx of information. The very knowledge that was supposed to set us free has instead led to the paralyzing stress of believing we’ll never know or remember enough.\nISBN: 9781982167387\nBuy on Amazon The 5 Choices #Every day brings a crushing wave of demands a barrage of texts, emails, interruptions, meetings, phone calls, tweets, blogs not to mention the high pressure demands of our jobs is overwhelming and exhausting. The sheer number of distractions threaten our ability to think clearly and make good decisions. If we react to these stimuli, moving mindlessly from one task to another, we will fail to accomplish the things that matter most in our professional and personal lives. In this book, readers will learn how to make the five fundamental choices that will increase their ability to achieve what matters most to them.\nISBN: 9781476711829\nBuy on Amazon The Pomodoro Technique #Francesco Cirillo developed his famed system for improving productivity as a college student thirty years ago. Using a kitchen timer shaped like a pomodoro (Italian for tomato), Cirillo divided the time he spent working on a project into 25-minute intervals, with 5-minute breaks in between, in order to get more done, without interruptions. By grouping a number of pomodoros together, users can tackle a project of any length, and drastically improve their productivity, enhance their focus, and better achieve their goals.\nISBN: 9781524760700\nBuy on Amazon General #Surgical Reading: How to Read 12 Books at Once #Surgical reading is a process I use when reading non-fiction books. I focus on locating and removing the most valuable pieces of information. This allows me to read many different books across a single topic at once.\nHow I Cured My Procrastination - Learn How To Learn #I went from a C student to an A engineering student while enjoying work more and having 100x more freedom TLDR (because I also hate digging through articles for the thing I clicked for): When I went to college I did extremely poorly sophomore year for numerous reasons, mostly being lazy an\n10 Real-Life Reasons Why SCRUM Fails in Software Development #Discover 10 real-world examples showing why SCRUM can be a hindrance in software development, from daily stand-ups to sprint planning chaos.\nLetters from BBC Television Licensing/intro #From the beginning of 2006, I decided not to renew my television licence. I found that my television viewing consisted almost entirely of tapes of old programmes purchased off Ebay, and that my watching of broadcast television was less than an hour a week. 
I therefore decided to stop watching broadcast television, and I today spend the £159 saved from the TV licence fee on video tapes and DVDs. It is a good decision; I now pay for what I watch, and not for what I don\u0026rsquo;t watch.\nYou Can\u0026rsquo;t See Me, But I Can Make You Rich #The rise of Faceless Accounts on Instagram\nManagement #Solving staffing challenges with concentric circles #Instead of resolving from the top down, start with the inside out.\nMy 8 Best Techniques for Evaluating Character #These methods have helped me enormously—and can save you much heartache and anxiety\nHow Your Manager\u0026rsquo;s Growth Potential and Willingness to Share Power Affects Your Career Trajectory #Are you fighting for scope with your manager? Is he keeping all the credit? Learn what types of managers are best for career growth and what to do if you\u0026rsquo;re stuck.\nManagement #Ikigai is a Japanese concept that means \u0026ldquo;reason for being\u0026rdquo;. It\u0026rsquo;s a combination of the words \u0026ldquo;iki\u0026rdquo;, meaning \u0026ldquo;life\u0026rdquo;, and \u0026ldquo;gai\u0026rdquo;, meaning \u0026ldquo;worth\u0026rdquo;. Ikigai is similar to the French term \u0026ldquo;raison d\u0026rsquo;etre\u0026rdquo;, which is the most important reason or purpose for someone or something\u0026rsquo;s existence.\nWhy It\u0026rsquo;s Easier to Manage 4 People Than It Is to Manage 1 Person #It’s easier to manage 4 people than it is to manage one person. The primary reason for this is the inherent over-reliance in the relationship between a manager and a single report. Let’s dive deeper.\nGuides #Learn about Red Hat Confidential Virtual Machines #RHEL aims to support the emerging Confidential Virtual Machines (CVM) use-case by enabling the hardware technologies such as AMD SEV-SNP and Intel TDX as well as adding support to the software stack.\nA Simple ELF - The Ivory Tower #The Ivory Tower is a blog about software engineering and development philosophy by Anders Sundman.\nDIY — UniFi Security Surveillance System Setup #In this article, I’ll discuss setting up an enterprise-grade network security surveillance system with UniFi (Ubiquiti), which has…\nHow to configure Borg client on macOS using command-line — Sun Knudsen #How to configure Borg client on macOS using command-line\nHow to build your first web application with Go #How to Build Your First Web Application with Go\n","date":"30 December 2024","permalink":"/links/link-collection-december-2024/","section":"Links","summary":"Discover December 2024\u0026rsquo;s top picks featuring open-source projects, essential reads, and expert guides. Dive into productivity hacks, management tips, and cutting-edge tech insights.","title":"Link Collection: December 2024"},{"content":"Welcome to the November 2024 edition of the Link Collection! This month’s collection features an exciting mix of open-source tools, thought-provoking books, and technical guides. From leadership strategies to hands-on tutorials, there’s something here for everyone.\nProjects #andrearaponi/dito #an advanced reverse proxy server written in Go . Contribute to andrearaponi/dito development by creating an account on GitHub.\nwasi-master/13ft #My own custom 12ft.io replacement. Contribute to wasi-master/13ft development by creating an account on GitHub.\ndocumenso/documenso #The Open Source DocuSign Alternative. 
Contribute to documenso/documenso development by creating an account on GitHub.\ndebauchee/barrier #Barrier is software that mimics the functionality of a KVM switch, which historically would allow you to use a single keyboard and mouse to control multiple computers by physically turning a dial on the box to switch the machine you\u0026rsquo;re controlling at any given moment. Barrier does this in software\nBooks #Getting Things Done #In today\u0026rsquo;s world, yesterday\u0026rsquo;s methods just don\u0026rsquo;t work. In Getting Things Done, veteran coach and management consultant David Allen shares the breakthrough methods for stress-free performance that he has introduced to tens of thousands of people across the country. Allen\u0026rsquo;s premise is simple: our productivity is directly proportional to our ability to relax. Only when our minds are clear and our thoughts are organized can we achieve effective productivity and unleash our creative potential.\nISBN: 9780142000281\nBuy on Amazon Staff Engineer #At most technology companies, you\u0026rsquo;ll reach Senior Software Engineer, the career level for software engineers, in five to eight years. At that career level, you\u0026rsquo;ll no longer be required to work towards the next promotion, and being promoted beyond it is exceptional rather than expected. At that point your career path will branch, and you have to decide between remaining at your current level, continuing down the path of technical excellence to become a Staff Engineer, or switching into engineering management. Of course, the specific titles vary by company, and you can replace \u0026ldquo;Senior Engineer\u0026rdquo; and \u0026ldquo;Staff Engineer\u0026rdquo; with whatever titles your company prefers.Over the past few years we\u0026rsquo;ve seen a flurry of books unlocking the engineering management career path, like Camille Fournier\u0026rsquo;s The Manager\u0026rsquo;s Path, Julie Zhuo\u0026rsquo;s The Making of a Manager, Lara Hogan\u0026rsquo;s Resilient Management and my own, An Elegant Puzzle.\nISBN: 9781736417911\nBuy on Amazon An Elegant Puzzle #A human-centric guide to solving complex problems in engineering management, from sizing teams to handling technical debt. There’s a saying that people don’t leave companies, they leave managers. Management is a key part of any organization, yet the discipline is often self-taught and unstructured. Getting to the good solutions for complex management challenges can make the difference between fulfillment and frustration for teams—and, ultimately, between the success and failure of companies. Will Larson’s An Elegant Puzzle focuses on the particular challenges of engineering management—from sizing teams to handling technical debt to performing succession planning—and provides a path to the good solutions. Drawing from his experience at Digg, Uber, and Stripe, Larson has developed a thoughtful approach to engineering management for leaders of all levels at companies of all sizes. An Elegant Puzzle balances structured principles and human-centric thinking to help any leader create more effective and rewarding organizations for engineers to thrive in.\nISBN: 9781953953339\nBuy on Amazon Extreme Ownership #Sent to the most violent battlefield in Iraq, Jocko Willink and Leif Babin’s SEAL task unit faced a seemingly impossible mission: help U.S. 
forces secure Ramadi, a city deemed “all but lost.” In gripping firsthand accounts of heroism, tragic loss, and hard-won victories in SEAL Team Three’s Task Unit Bruiser, they learned that leadership—at every level—is the most important factor in whether a team succeeds or fails.Willink and Babin returned home from deployment and instituted SEAL leadership training that helped forge the next generation of SEAL leaders. After departing the SEAL Teams, they launched Echelon Front, a company that teaches these same leadership principles to businesses and organizations. From promising startups to Fortune 500 companies, Babin and Willink have helped scores of clients across a broad range of industries build their own high-performance teams and dominate their battlefields. Now, detailing the mind-set and principles that enable SEAL units to accomplish the most difficult missions in combat, Extreme Ownership shows how to apply them to any team, family or organization. Each chapter focuses on a specific topic such as Cover and Move, Decentralized Command, and Leading Up the Chain, explaining what they are, why they are important, and how to implement them in any leadership environment. A compelling narrative with powerful instruction and direct application, Extreme Ownership revolutionizes business management and challenges leaders everywhere to fulfill their ultimate purpose: lead and win.\nISBN: 9781466874961\nBuy on Amazon The Almanack of Naval Ravikant #Getting rich is not just about luck; Happiness is not just a trait we are born with. These aspirations may seem out of reach, but building wealth and being happy are skills we can learn. So what are these skills, and how do we learn them? What are the principles that should guide our efforts? What does progress really look like? Naval Ravikant is an entrepreneur, philosopher, and investor who has captivated the world with his principles for building wealth and creating long-term happiness. The Almanack of Naval Ravikant is a collection of Naval’s wisdom and experience from the last ten years, shared as a curation of his most insightful interviews and poignant reflections. This isn’t a how-to book, or a step-by-step gimmick. Instead, through Naval’s own words, you will learn how to walk your own unique path toward a happier, wealthier life.\nISBN: 9781544514208\nBuy on Amazon General #How Big Tech Runs Tech Projects and the Curious Absence of Scrum #A survey of how tech projects run across the industry highlights Scrum being absent from Big Tech. Why is this, and are there takeaways others should take note of?\nThe Revolution Has Begun in the UK #75,000 UK parents have come together to give their kids a smartphone-free childhood\nNautilus Omnibus: Plan Your Day Naturally #A simple time-blocking tool with task auto-advance. Simply write your tasks and events see how they fit in your day.\nHow Do I Prepare My Phone for a Protest? (Updated 2024) – The Markup #Simple steps to take before hitting the streets\nThe Environmental Impact of Cloud Computing #Ana Rodrigues too deleted her Spotify account, citing numerous valid reasons\nManagement #What is Lean Coffee? - an introduction to agenda-less meetings #“Agendas are so 20th century” — Lean Coffee is structured, lightweight meeting format. Participants gather, co-create an agenda, and begin talking.\nThe Steel Man Technique: How To Argue Better And Be More Persuasive #The steel man is the opposite of the straw man. 
It\u0026rsquo;s being charitable, and building up the best possible form of the argument for the other side.\n25 Habits of Highly Effective Managers #Sharp folks from across the First Round community share the small habits that great managers do, including delivering feedback with care, opening up about failure, and sending praise up the chain.\nYou Can’t Sit Out Office Politics #Office politics aren’t something you can sit out. Most people look down upon them, but the truth is, they are a part of every organization. Office politics are about relationship currency and influence capital — and the power these two things give you or don’t give you.\nGrowing Leaders to Solve the Hardest Problems #In an organization with many teams, problems will arise that span across these teams and require solutions broader than an individual manager’s purview. These types of projects include things like: introducing changes to a quarterly planning process, agreeing on broad architectural changes, rolling out a new project management tool, or making changes to how on-call is managed.\nGuides #Building a home router #Building a home router - 2024 edition\nMake Your Own CDN with NetBSD #Learn how to build a self-hosted CDN using NetBSD, Varnish, and nginx\nThe Grymoire\u0026rsquo;s tutorial on SED #The Grymoire - Tutorial on the SED stream editor.\nChanging /etc/hosts based on network connection #I use my laptop at home, university, and public locations. The IP address I use to connect to a particular resource changes depending on if I’m within the network it’s hosted on or a VPN.\nManaging Secrets with Vault and Consul #The following tutorial details how to set up and use Hashicorp\u0026rsquo;s Vault and Consul projects to securely store and manage secrets.\n","date":"26 November 2024","permalink":"/links/link-collection-november-2024/","section":"Links","summary":"Explore November 2024\u0026rsquo;s selection of projects, books, guides, and management insights. Highlights include open-source tools, leadership strategies, and technical tutorials.","title":"Link Collection: November 2024"},{"content":"Welcome to the April 2024 edition of the Link Collection! This month, we highlight tools for end-to-end encryption, strategies for team building, and more.\nProjects #ente-io/ente #Fully open source, End to End Encrypted alternative to Google Photos and Apple Photos - ente-io/ente\ncantino/mcfly #Fly through your shell history. Great Scott! Contribute to cantino/mcfly development by creating an account on GitHub.\nollama/ollama #Get up and running with Llama 2, Mistral, and other large language models locally. - ollama/ollama: Get up and running with Llama 2, Mistral, and other large language models locally.\nBooks #Slow Down #Why, in our affluent society, do so many people live in poverty, without access to health care, working multiple jobs and are nevertheless unable to make ends meet, with no future prospects, while the planet is burning? In his international bestseller, Kohei Saito argues that while unfettered capitalism is often blamed for inequality and climate change, subsequent calls for “sustainable growth” and a “Green New Deal” are a dangerous compromise.\nISBN: 9781662602368\nBuy on Amazon The Secret Life of Money #The Secret Life of Money leads readers on a fascinating journey to uncover the sources of our monetary desires. 
By understanding why money has the power to obsess us, we gain the power to end destructive patterns and discover riches of the soul.\nISBN: 9781621538158\nBuy on Amazon The Nice Factor #Nice people want to be liked by everyone. They always afraid of offending so they accommodate other people above themselves and adapt their behaviour to suit what they think other people want. Nice people are people-pleasers but they feel compromised and hard done-by a lot of the time.\nISBN: 9781905745364\nBuy on Amazon General #Simplifying the xz backdoor #Step by step I simplify the beginning of the xz backdoor so there’s no doubt of what it does.\nUnsigned Commits #I’m not going to cryptographically sign my git commits, and you shouldn’t either.\nAtuin - Magical Shell History #Sync, search and backup shell history with Atuin\nManagement #How to Build a High Performing Team - Leadership Garden #Discover how to build high-performing software engineering teams, focusing on synergy, clear goals, and shared vision. Get actionable insights and practical advice.\nLieutenants are the limiting reagent #Why don\u0026rsquo;t software companies ship more products? Why do they move more slowly as they grow? What do we mean when we say \u0026ldquo;this company lacks focus\u0026rdquo;?\nBetter to micromanage than be disengaged. #For a long time, I found the micromanager CEO archetype very frustrating to work with.\nGuides #Using Shortcuts Automations To Remind Me of Coupon Codes #Using Shortcuts Automations To Remind Me of Coupon Codes I use an app called SudShare to do my laundry. I got an email from them the other day with a coupon…\nMakefile tricks for Python projects #I like using Makefiles. They work great both as simple task runners as well as build systems for medium-size projects. This is my starter template for Python projects. Note: This blog post assumes …\nUpdating my website from my iPad! | Daniel Diaz\u0026rsquo;s Website #How I am able to use github codespaces to develop and push updates to my website, from my iPad.\n","date":"25 April 2024","permalink":"/links/link-collection-april-2024/","section":"Links","summary":"Explore April 2024\u0026rsquo;s selection of projects, books, and guides on technology and management. Highlights include end-to-end encryption tools and shell utilities.","title":"Link Collection: April 2024"},{"content":"Welcome to the December 2023 edition of the Link Collection! This month, we feature a range of resources, from tools for Mac users and AWS cost monitoring to thought-provoking books and management strategies.\nProjects # Sloth - Mac app that shows all open files and sockets #Sveinbjörn\u0026rsquo;s personal website. Also some open source software stuff.\nGitHub - mrjackwills/havn: A fast configurable port scanner with reasonable defaults #A fast configurable port scanner with reasonable defaults - GitHub - mrjackwills/havn: A fast configurable port scanner with reasonable defaults\nIntroduction | asdf #Manage multiple runtime versions with a single CLI tool\nBooks # Firestarters #Based on interviews with entrepreneurs and leaders in many walks of life, this self-help book gives readers the tools for finding success in their careers, businesses, organizations, and private lives. What is the difference between those bold enough to pursue their dreams and others who never get comfortable enough to ignite their lives? The doers are \u0026ldquo;Firestarters\u0026rdquo; and, because of them, the world is a much different, and often, better place. 
This motivational how-to book provides insights into the crucial difference between people who make things happen and those who only think about making an impact. Based on research from many disciplines and interviews with professionals at the top of their fields, Firestarters creates a complete roadmap to achieve personal success and make an impact in the world. The heart of the book features stories about successful entrepreneurs, CEOs, organizational leaders, and forward-looking thinkers from a variety of professions.\nISBN: 9781633883482\nBuy on Amazon\nThe Rational Optimist: How Prosperity Evolves #In a bold and provocative interpretation of economic history, Matt Ridley, the New York Times-bestselling author of Genome and The Red Queen, makes the case for an economics of hope, arguing that the benefits of commerce, technology, innovation, and change what Ridley calls cultural evolution will inevitably increase human prosperity.\nISBN: 9780007374816\nBuy on Amazon\nBillion Dollar Whale #Named a Best Book of 2018 by the Financial Times and Fortune, this \u0026ldquo;thrilling\u0026rdquo; (Bill Gates) New York Times bestseller exposes how a \u0026ldquo;modern Gatsby\u0026rdquo; swindled over $5 billion with the aid of Goldman Sachs in \u0026ldquo;the heist of the century\u0026rdquo; (Axios). Now a #1 international bestseller, Billion Dollar Whale is \u0026ldquo;an epic tale of white-collar crime on a global scale\u0026rdquo; (Publishers Weekly), revealing how a young social climber from Malaysia pulled off one of the biggest heists in history. In 2009, a chubby, mild-mannered graduate of the University of Pennsylvania\u0026rsquo;s Wharton School of Business named Jho Low set in motion a fraud of unprecedented gall and magnitude. One that would come to symbolize the next great threat to the global financial system.\nISBN: 9780316436489\nBuy on Amazon\nGeneral # 10 Years After Snowden: Some Things Are Better, Some We’re Still Fighting For #On May 20, 2013, a young government contractor with an EFF sticker on his laptop disembarked a plane in Hong Kong carrying with him evidence confirming, among other things, that the United States government had been conducting mass surveillance on a global scale. What came next were weeks of\u0026hellip;\nWorld likely to breach 1.5C climate threshold by 2027, scientists warn #UN agency says El Niño and human-induced climate breakdown could combine to push temperatures into ‘uncharted territory’\nMonitor your AWS bill #Nobody likes a surprise bill. Learn some ways to keep your AWS bill under control and avoid that end of the month panic.\nManagement # Measuring an engineering organization. #This is an unedited chapter from O’Reilly’s The Engineering Executive’s Primer. For the past several years, I’ve run a learning circle with engineering executives. The most frequent topic that comes up is career management–what should I do next? The second most frequent topic is measuring engineering teams and organizations–my CEO has asked me to report monthly engineering metrics, what should I actually include in the report? Any discussion about measuring engineering organizations quickly unearths strong opinions.\nHow To Prioritize Tasks #Shipping products is hard. What makes it hard is that typical products involve multiple teams and multiple dependencies. Navigating these challenges is non-trivial. 
There are technical challenges to overcome, but those are typically not the biggest blockers.\nHow to survive a toxic workplace and how to avoid creating one #Inspired by a two minute video about how the Navy Seals does it\n","date":"12 December 2023","permalink":"/links/link-collection-december-2023/","section":"Links","summary":"","title":"Link Collection: December 2023"},{"content":"","date":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux"},{"content":"Step 1: Understanding Patches #A patch is a file that consists of a list of differences between one set of files and another. In software development, patches are used to update code, fix bugs, or add new features.\nStep 2: Install the Patch Tool #Most Linux distributions come with the patch utility pre-installed. If it\u0026rsquo;s not installed, you can install it using your distribution\u0026rsquo;s package manager. For example, on Centos, you would use:\ndnf install patch Step 3: Create a Patch File #To create a patch file between an original file original.c and a modified file modified.c, use the diff command:\ndiff -u original.c modified.c \u0026gt; changes.patch This command creates a file named changes.patch containing the differences.\nStep 4: Apply the Patch #To apply the patch to another copy of the original file:\npatch original.c changes.patch Example: Patching a Simple Program #Original Code (original.c) ##include \u0026lt;stdio.h\u0026gt; int main() { printf(\u0026#34;Hello, world!\\n\u0026#34;); return 0; } Modified Code (modified.c) ##include \u0026lt;stdio.h\u0026gt; int main() { printf(\u0026#34;Hello, Linux World!\\n\u0026#34;); return 0; } Creating the Patch # Save the original and modified codes in original.c and modified.c respectively.\nRun:\ndiff -u original.c modified.c \u0026gt; mypatch.patch Applying the Patch # Have another copy of original.c ready.\nApply the patch:\npatch original.c mypatch.patch ","date":"3 December 2023","permalink":"/posts/linux-patch-management/","section":"Posts","summary":"This tutorial provides a step-by-step guide on how to create and apply patches in Linux, including an example of patching a simple piece of software.","title":"Linux Patch Management Tutorial"},{"content":"","date":null,"permalink":"/tags/patch/","section":"Tags","summary":"","title":"Patch"},{"content":"Managing a web domain can be a hassle, especially if you have a dynamic IP address. A dynamic IP address can change often, which makes it difficult to keep your DNS A record up-to-date. Fortunately, Gandi API provides a simple solution for updating DNS records programmatically.\nIn this tutorial, we\u0026rsquo;ll show you how to use the Gandi API, Docker, and shell scripting to automate the process of updating your DNS A record to reflect your current external IP address.\nFollow-up to Dynamic DNS Using Gandi This tutorial is a follow-up to the Dynamic DNS Using Gandi tutorial, which explains how to update DNS records using the Gandi API. The follow-up tutorial builds on the previous tutorial by demonstrating how to create a Docker container that runs the script as a service. By using Docker, you can package the script and its dependencies into a single container, making it easy to deploy and run on any platform. This approach ensures that the script is always running and updating your DNS records, even in the event of container restarts or system failures. 
In summary, this tutorial builds on the previous tutorial by demonstrating how to create a Docker container that runs the update_dns.sh script as a service, ensuring that your DNS records are always up-to-date.\nPrerequisites #Before we start, you will need the following:\nA Gandi account with an API key A domain name and a subdomain that you want to update Docker installed on your computer Setting up the environment variables #First, create a .env file with the following environment variables:\nGANDI_API_KEY=\u0026lt;api_key\u0026gt; DOMAIN=example.com SUBDOMAIN=subdomain TTL=300 IPLOOKUP=http://whatismyip.akamai.com/ Replace api_key with your Gandi API key, example.com with your domain name, subdomain with your subdomain, and 300 with your desired TTL value. The IPLOOKUP variable is the URL to check your public IP address. The default value is http://whatismyip.akamai.com/\nCreating the scripts #Now, let\u0026rsquo;s create the scripts that will update the DNS records automatically.\nCreate a start.sh file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 #!/bin/sh # Set the log file path LOG_FILE=\u0026#34;/var/log/update_dns.log\u0026#34; # Log when the container starts echo \u0026#34;$(date): Starting container\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; # Run the update_dns script once /bin/sh /usr/local/bin/update_dns.sh # Start the cron daemon crond -L /var/log/cron.log # Tail the logs to keep the container running tail -f /var/log/update_dns.log /var/log/cron.log \u0026amp; # Log when the container stops trap \u0026#34;echo $(date): Stopping container \u0026gt;\u0026gt; $LOG_FILE\u0026#34; EXIT # Wait for the container to stop wait This start.sh script sets up the log file path, logs when the container starts, runs the update_dns.sh script once, starts the crond daemon, tails the logs to keep the container running, logs when the container stops, and waits for the container to stop.\nFinally, create an update_dns.sh file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 #!/bin/bash # Set your Gandi API key, domain name, and subdomain GANDI_API_KEY=\u0026#34;$GANDI_API_KEY\u0026#34; DOMAIN=\u0026#34;$DOMAIN\u0026#34; SUBDOMAIN=\u0026#34;$SUBDOMAIN\u0026#34; # Set the TTL value for the DNS A record in seconds (default is 1800 seconds / 30 minutes) TTL=\u0026#34;$TTL\u0026#34; IPLOOKUP=\u0026#34;$IPLOOKUP\u0026#34; # Set the log file path LOG_FILE=\u0026#34;/var/log/update_dns.log\u0026#34; # Get the current external IP address CURRENT_IP=$(curl -s $IPLOOKUP) # Get the IP address and TTL of the DNS A record via the Gandi API DNS_INFO=$(curl -s -H \u0026#34;Authorization: Apikey $GANDI_API_KEY\u0026#34; \\ \u0026#34;https://dns.api.gandi.net/api/v5/domains/$DOMAIN/records/$SUBDOMAIN/A\u0026#34;) # Check if the DNS record exists if [ -z \u0026#34;$DNS_INFO\u0026#34; ]; then # Log an error if the DNS record doesn\u0026#39;t exist echo \u0026#34;$(date): Error: DNS record doesn\u0026#39;t exist\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; exit 1 fi # Extract the DNS IP address and TTL value from the API response DNS_IP=$(echo \u0026#34;$DNS_INFO\u0026#34; | jq -r \u0026#39;.rrset_values[0]\u0026#39;) DNS_TTL=$(echo \u0026#34;$DNS_INFO\u0026#34; | jq -r \u0026#39;.rrset_ttl\u0026#39;) # Check if the DNS IP is empty if [ -z \u0026#34;$DNS_IP\u0026#34; ]; then # Log an error if 
the DNS IP is empty echo \u0026#34;$(date): Error: DNS IP is empty\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; exit 1 fi # Compare the IP addresses if [ \u0026#34;$CURRENT_IP\u0026#34; != \u0026#34;$DNS_IP\u0026#34; ]; then # Log when there is an IP change echo \u0026#34;$(date): IP address changed from $DNS_IP to $CURRENT_IP\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; # Update the DNS A record via the Gandi API RESPONSE=$(curl -s -o /dev/null -w \u0026#34;%{http_code}\u0026#34; \\ -X PUT -H \u0026#34;Content-Type: application/json\u0026#34; -H \u0026#34;Authorization: Apikey $GANDI_API_KEY\u0026#34; \\ -d \u0026#39;{\u0026#34;rrset_values\u0026#34;: [\u0026#34;\u0026#39;$CURRENT_IP\u0026#39;\u0026#34;], \u0026#34;rrset_ttl\u0026#34;: \u0026#39;$TTL\u0026#39;}\u0026#39; \\ \u0026#34;https://dns.api.gandi.net/api/v5/domains/$DOMAIN/records/$SUBDOMAIN/A\u0026#34;) if [ \u0026#34;$RESPONSE\u0026#34; == \u0026#34;200\u0026#34; ] || [ \u0026#34;$RESPONSE\u0026#34; == \u0026#34;201\u0026#34; ]; then # Log when the DNS record is updated echo \u0026#34;$(date): DNS A record updated to $CURRENT_IP with TTL $TTL seconds\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; else # Log an error if the API request fails echo \u0026#34;$(date): API request failed with status code $RESPONSE\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; fi else # Log when the script is run without any IP change echo \u0026#34;$(date): IP address unchanged at $CURRENT_IP with TTL $DNS_TTL seconds\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; fi This update_dns.sh script sets up the required variables, gets the current external IP address, gets the IP address and TTL of the DNS A record via the Gandi API, checks if the DNS record exists and the DNS IP, compares the IP addresses and updates the DNS A record via the Gandi API if there is an IP change.\nBuilding the Docker container #Now, let\u0026rsquo;s create a Docker container to run our script. Create a Dockerfile with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 FROM alpine:3.15 RUN apk add --no-cache curl jq COPY update_dns.sh /usr/local/bin/ RUN chmod +x /usr/local/bin/update_dns.sh COPY start.sh /usr/local/bin/ RUN chmod +x /usr/local/bin/start.sh ENTRYPOINT [\u0026#34;/usr/local/bin/start.sh\u0026#34;] CMD [\u0026#34;crond\u0026#34;, \u0026#34;-f\u0026#34;] This Dockerfile uses the alpine:3.15 image, installs curl and jq, copies the update_dns.sh and start.sh scripts to the container, and sets start.sh as the entry point.\nNext, create a docker-compose.yml file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 version: \u0026#34;3.9\u0026#34; services: update-dns: build: context: . dockerfile: Dockerfile volumes: - ./crontab.txt:/etc/crontabs/root - ./logs:/var/log - \u0026#34;/etc/timezone:/etc/timezone:ro\u0026#34; - \u0026#34;/etc/localtime:/etc/localtime:ro\u0026#34; env_file: - .env command: [\u0026#34;crond\u0026#34;, \u0026#34;-f\u0026#34;] This docker-compose.yml file defines a service named update-dns that builds the Docker image using the Dockerfile and sets up the required volumes, environment variables, and command to run.\nUsing crontab.txt to schedule tasks #In addition to the scripts and Dockerfile, the docker-compose.yml file in the repository references a file named crontab.txt as a volume. 
This file is used to schedule tasks using the cron utility.\nThe crontab.txt file in the repository contains the following line:\n*/30 * * * * /bin/sh /usr/local/bin/update_dns.sh This line specifies that the update_dns.sh script should be run every 30 minutes.\nWhen the Docker container is started, the crontab.txt file is mounted as a volume in the container\u0026rsquo;s /etc/crontabs/root directory. The cron daemon reads this file and runs the scheduled tasks at the specified intervals.\nIn summary, the crontab.txt file is used to schedule the execution of the update_dns.sh script every 30 minutes, ensuring that the DNS records are updated regularly.\nRunning the Docker container #To run the Docker container, use the following command:\ndocker-compose up -d This command builds the Docker image, creates a container, and starts the container in detached mode. The -d flag indicates that the container should run in the background.\nYou can build the container separately if you want to by running\ndocker build -t gandi-dyndns . You can check the logs in the /logs/ directory. There are two logs that will be output. They are cron.log and update_dns.log.\nupdate_dns.log contains all the log output from the script and will look something like this:\nWed Mar 22 16:26:10 UTC 2023: Starting container Wed Mar 22 16:27:01 UTC 2023: IP address changed from \u0026lt;old_ip\u0026gt; to \u0026lt;new_ip\u0026gt; Wed Mar 22 16:27:01 UTC 2023: DNS A record updated to \u0026lt;new_ip\u0026gt; with TTL 300 seconds Wed Mar 22 16:28:01 UTC 2023: IP address unchanged at \u0026lt;old_ip\u0026gt; with TTL 300 seconds Wed Mar 22 16:29:00 UTC 2023: IP address unchanged at \u0026lt;old_ip\u0026gt; with TTL 300 seconds Source code #You can find the complete source code for this tutorial on the GitHub repository virtuallytd/gandi-dyndns. The repository contains the Dockerfile, docker-compose.yml, update_dns.sh, start.sh, .env and crontab.txt files used in this tutorial.\nFeel free to fork the repository and modify the code to suit your needs.\nConclusion #In this tutorial, we have learned how to update DNS records automatically using Docker and the Gandi API. We have created a Docker container with the required scripts and environment variables, built the Docker image, and run the container in detached mode. We have also checked the logs to make sure that the scripts are running correctly.\nWith this setup, you can rest assured that your DNS records will be updated automatically, keeping your website online 24/7.\nOriginal Article Dynamic DNS Using Gandi\n","date":"26 March 2023","permalink":"/posts/automate-dynamic-dns-updates-with-gandi-api-and-docker/","section":"Posts","summary":"This tutorial demonstrates how to automate dynamic DNS updates using Gandi API, Docker, and shell scripting. Learn to package the update script into a Docker container for seamless deployment, ensuring uninterrupted DNS updates even with dynamic IP changes.​","title":"Automate Dynamic DNS Updates with Gandi API and Docker"},{"content":"","date":null,"permalink":"/tags/dns/","section":"Tags","summary":"","title":"Dns"},{"content":"","date":null,"permalink":"/tags/dyndns/","section":"Tags","summary":"","title":"Dyndns"},{"content":"","date":null,"permalink":"/tags/selfhosting/","section":"Tags","summary":"","title":"Selfhosting"},{"content":"Welcome to the January 2023 edition of the Link Collection! 
This month’s compilation features productivity tools, insightful projects, helpful guides, and books to inspire and inform.\nGeneral # GPT-3 Is the Best Journal I’ve Ever Used\nFor the past few weeks, I’ve been using GPT-3 to help me with personal development. I wanted to see if it could help me understand issues in my life better, pull out patterns in my thinking, help me bring more gratitude into my life, and clarify my values.\nUses This\nUses This is a collection of nerdy interviews asking people from all walks of life what they use to get the job done.\nYou May Be Early, but You\u0026rsquo;re Not Wrong: A Covid Reading List\nYesterday, I came across a somber tweet by a man who’s trying to protect his family from Covid. He said, “my wife has been speaking with the principal of my children’s elementary school and that he has been advising her to file for divorce because I was clearly not well and ‘my life revolves around fear.’”\nProjects # Scraping Information From LinkedIn Into CSV using Python\nIn this post, we are going to scrape data from Linkedin using Python and a Web Scraping Tool. We are going to extract Company Name, Website, Industry, Company Size, Number of employees, Headquarters Address, and Specialties.\nAnna’s Archive\nAnna’s Archive is a project that aims to catalog all the books in existence, by aggregating data from various sources. We also track humanity’s progress toward making all these books easily available in digital form, through “shadow libraries”.\nHow to use Raycast and how it compares to Spotlight and Alfred\nMost Mac users find Spotlight, Apple’s built-in tool for searching through apps and files, to suit their needs just fine. But power users who want to have near total control over their computer (as well as access to shortcuts and tools) have often looked for other alternatives. Lately, an app called Raycast has been gaining attention as one of those options, competing with one of the community’s long-standing favorites, Alfred.\nGuides # Build a Tiny Certificate Authority For Your Homelab\nIn this tutorial, we’re going to build a tiny, standalone, online Certificate Authority (CA) that will mint TLS certificates and is secured with a YubiKey. It will be an internal ACME server on our local network (ACME is the same protocol used by Let’s Encrypt). The YubiKey will securely store the CA private keys and sign certificates, acting as a cheap alternative to a Hardware Security Module (HSM). We’ll also use an open-source True Random Number Generator, called Infinite Noise TRNG, to spice up the Linux entropy pool.\nSSH - run script or command at login\nThere a multiple use cases to run a script on login. Configuration, starting services, logging, sending a notification, and so on. I want to show you different ways to do so.\nDryer Notifications with Home Assistant GUI only\nDryer notifications using Tasmota with Home Assistant autodiscovery and automation triggers plus an energy cost calculation sensor and dryer state sensor.\nManagement # Awesome CTO\nA curated and opinionated list of resources for Chief Technology Officers and VP R\u0026amp;D, with the emphasis on startups and hyper-growth companies.\nTact Filter\nI came up with this idea several years ago in a conversation with a friend at MIT, who was regularly finding herself upset by other people who worked in her lab. The analogy worked so well in helping her to understand her co-workers that I decided to write it up and put it on the web. 
I\u0026rsquo;ve gotten quite a few email messages since then from other people who have also found it helpful.\nAn Exact Breakdown of How One CEO Spent His First Two Years of Company-Building\nPeople often wonder how startup CEOs spend their time. Well, I’m a bit obsessive, and I track every 15-minute increment of how I spend my time and I’ve been doing so religiously for years. A little background — as a four-time founder, I\u0026rsquo;ve historically been on the technical side of the companies, either as an individual contributor or leading engineering teams. My role as CEO of Levels is my first non-technical role.\nBooks # Essentialism: The Disciplined Pursuit of Less\nThe Way of the Essentialist involves doing less, but better, so you can make the highest possible contribution. Purchase from Amazon.de\nWhen They Win You Win\nWe don’t need another person’s opinion about what it means to be a great manager. We need to learn to lead in a way that measurably and predictably delivers more engaged employees and better business results. Purchase from Amazon.de\nSpare\nIt was one of the most searing images of the twentieth century: two young boys, two princes, walking behind their mother’s coffin as the world watched in sorrow—and horror. As Princess Diana was laid to rest, billions wondered what Prince William and Prince Harry must be thinking and feeling and how their lives would play out from that point on. Purchase from Amazon.de\n","date":"22 January 2023","permalink":"/links/link-collection-january-2023/","section":"Links","summary":"Explore January 2023\u0026rsquo;s selection of links, including projects, guides, books, and management insights. A must-read for tech enthusiasts and professionals.","title":"Link Collection: January 2023"},{"content":"","date":null,"permalink":"/tags/documentation/","section":"Tags","summary":"","title":"Documentation"},{"content":"When writing documentation it is good practice not to use public/valid domain names or IP addresses. The RFC documents listed below provide domains and IPs that can be used for examples or documentation purposes.\nDomains #Domains reserved for documentation are described in\nRFC2606 - Reserved Top Level DNS Names RFC6761 - Special-Use Domain Names. Top level domain names reserved for documentation:\n.test // for testing .example // for examples .invalid // obviously for invalid domain names .localhost // only pointing to the loopback IP address Second level domain names reserved for documentation:\nexample.com example.net example.org IPv4 #IPv4 addresses reserved for documentation are described in\nRFC1918 - Address Allocation for Private Internets RFC6598 - IANA-Reserved IPv4 Prefix for Shared Address Space RFC6890 - Special-Purpose IP Address Registries RFC8190 - Updates to the Special-Purpose IP Address Registries and obsolete\nRFC3330 - Special-Use IPv4 Addresses RFC5735 - Special Use IPv4 Addresses The IPv4 documentation-only network blocks are 192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24\nPrivate address space (RFC1918), commonly used in examples of internal networks:\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) IPv6 #IPv6 addresses reserved for documentation are described in\nRFC3849 - IPv6 Address Prefix Reserved for Documentation. The IPv6 documentation-only network block is 2001:DB8::/32\n","date":"5 September 2021","permalink":"/posts/domain-names-and-ips-for-documentation/","section":"Posts","summary":"Discover the reserved domain names and IP addresses recommended for technical documentation as outlined in RFC standards. 
Learn about compliant examples for domain names, IPv4, and IPv6 addresses to create accurate and standardized documentation.","title":"Domain Names and IPs for Documentation"},{"content":"","date":null,"permalink":"/tags/network/","section":"Tags","summary":"","title":"Network"},{"content":"","date":null,"permalink":"/tags/technical/","section":"Tags","summary":"","title":"Technical"},{"content":"Introduction #In today\u0026rsquo;s interconnected world, efficient network configuration is key. This guide focuses on a specific aspect of network configuration on macOS: setting up DNS routing for specific domains. Ideal for those who use VPNs and wish to maintain optimal network configuration, this guide will walk you through the process step-by-step.\nOverview: Custom DNS Configuration for Specific Domains on macOS #I have been looking into a solution for using specific DNS servers for certain internal subdomains. These DNS servers are only available via VPN.\nI don\u0026rsquo;t want all my queries to go through this internal DNS resolver, because my usual resolver blocks ads and trackers.\nThe Effective Solution: to specify the resolver to use for a specific domain, create a file named after the domain in /etc/resolver/ and add the nameservers.\nStep-by-Step Configuration Guide #Step 1: Verify the Existence of /etc/resolver/ Directory #It\u0026rsquo;s essential to first ensure that the required directory exists on your system. This directory will hold your custom DNS configurations. First make sure the /etc/resolver/ directory exists\nmacbook:~ user$ sudo mkdir /etc/resolver/ Step 2: Creating a Domain-Specific Configuration File #Once you have confirmed the existence of the directory, the next step involves creating a file that is specific to the domain you want to configure. Create the domain file\nmacbook:~ user$ sudo vi /etc/resolver/example.com Step 3: Adding Nameservers to Your Domain File #After creating the domain-specific file, the crucial part is to add the nameservers. This determines where your DNS queries for the domain are sent. Add the nameservers to the file you just created\nmacbook:~ user$ cat /etc/resolver/example.com nameserver 192.0.2.100 Now, all queries for example.com will be resolved by 192.0.2.100.\nThe caveat with this technique is that tools like dig query DNS servers directly rather than using the system resolver the way ordinary apps do, so they will bypass this configuration.\nTesting Your DNS Configuration #After setting up your DNS configurations, it\u0026rsquo;s vital to test and ensure that they are working as expected.\nVerifying Configuration with \u0026lsquo;scutil --dns\u0026rsquo; #A reliable way to test your configuration is by using the scutil --dns command.\nUsing \u0026lsquo;scutil --dns\u0026rsquo; for Verification #Use the scutil --dns Command to Verify Configuration:\nmacbook:~ user$ scutil --dns resolver #8 domain : example.com nameserver[0] : 192.0.2.100 flags : Request A records, Request AAAA records reach : 0x00000002 (Reachable) Frequently Asked Questions #Q1: Why is custom DNS routing important on macOS?\nA: Custom DNS routing allows for more control over network traffic, particularly useful in professional settings or when using VPNs.\nQ2: Can this setup improve network security?\nA: Yes, by directing DNS queries through specific servers, you can enhance security and privacy.\nQ3: What if I encounter errors during configuration?\nA: Ensure you have admin rights and that you\u0026rsquo;re entering commands correctly. 
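For example, a quick way to re-check the setup from the terminal (a minimal sketch, assuming the example.com resolver file created in the steps above; host.example.com stands in for any record your internal DNS server actually knows about):
macbook:~ user$ cat /etc/resolver/example.com
macbook:~ user$ scutil --dns | grep -A 3 example.com
macbook:~ user$ dscacheutil -q host -a name host.example.com
The first command confirms the file contents, scutil --dns shows whether macOS has picked up the custom resolver, and dscacheutil goes through the system resolver (unlike dig), so it should return the record when the internal nameserver is reachable over the VPN.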
For specific issues, consult online forums or Apple support.\nConclusion\nCustom DNS routing on macOS can significantly improve your network performance, especially when dealing with internal domains over VPNs. This guide aims to simplify the process, making it accessible even to those with limited networking experience.\n","date":"26 March 2021","permalink":"/posts/macos-dns-routing-by-domain/","section":"Posts","summary":"Learn how to configure different nameservers for specific domains on macOS for optimized network performance","title":"How to Set Up DNS Routing by Domain on macOS"},{"content":"","date":null,"permalink":"/tags/mac/","section":"Tags","summary":"","title":"Mac"},{"content":"Welcome to the February 2021 edition of the Link Collection! This month, we explore tools and strategies for improving remote work, innovative tech projects, and management insights for leaders.\nGeneral # 20 Future Technologies That Will Change the World by 2050\nI recently shared an article called “The “Next Big Thing” in Technology : 20 Inventions That Will Change the World”, which got a few dozen thousand hits in the past couple of weeks. This calls for a sequel.\nWork Lessons from the Pandemic\nI’ve been thinking a lot about what changes in my work I’d like to keep, post-pandemic (can we even talk about a post-pandemic world? It still feels pretty far off). I’m trying to be deliberate and actionable about it.\nRemote Work: 5 Strategies for Creating Long Term Support\nMany people have been predicting that the pandemic will have a lasting impact on remote work. I came across an article the other day that stated prior to COVID-19, about 4% of the total U.S. workforce was working remotely.\nProjects # CCS811 Indoor Air Quality Sensor Driver in Rust\nWe spend an enormous amount of time indoors. The indoor air quality is often overlooked but it is actually an important factor in our health, comfort and even productivity. There are lots of things that contribute to the degradation of the indoor air quality over time.\nAdblockerGoogleSearch\nAn extension that removes ads from google search results and moves real results up!\nbunkerized-nginx\nDocker image secured by non-exhaustive list of features: HTTPS support with transparent Let\u0026rsquo;s Encrypt automation State-of-the-art web security, HTTP security headers, hardening etc.\nGuides # YubiKey for SSH, Login, 2FA, GPG and Git Signing\nI\u0026rsquo;ve been using a YubiKey Neo for a bit over two years now, but its usage was limited to 2FA and U2F. Last week, I received my new DELL XPS 15 9560, and since I am maintaining some high impact open source projects, I wanted the setup to be well secured.\nTraefik: canary deployments with weighted load balancing\nTraefik is the Cloud Native Edge Router yet another reverse proxy and load balancer. Omitting all the cloud-native buzzwords, what really makes Traefik different from Nginx, HAProxy, and alike is the automatic and dynamic configurability it provides out of the box.\nBuilding Serverless Microservices – Picking the right design\nIn the last article, we built a serverless microservice. But one microservice on its own doesn’t do much “by design”. In this post, we will start accelerating our microservice game, by adding a “simple” requirement that will shape our system.\nManagement # Optimize Onboarding\nIt takes roughly 2 weeks to form a habit; it takes roughly two weeks to get comfortable in a new environment. 
A common mistake is to treat a new report’s first couple weeks like college orientation - social, light hearted, get-to-know-you stuff.\nLearn The \u0026ldquo;Disagree and Commit\u0026rdquo; Exercise for Better Leadership\nWhat can make us incredibly valuable at work - our willingness to disagree openly and commit to helping others succeed or sticking to our arguments even when others have moved forward and a decision has been made.\nHow to (Actually) Change Someone’s Mind\nIf you’re a leader, it’s likely that not everyone who works with you will agree with the decisions you make — and that’s okay. Leadership involves making unpopular decisions while navigating complex relationships with colleagues, partners, and clients.\n","date":"10 February 2021","permalink":"/links/link-collection-february-2021/","section":"Links","summary":"Explore February 2021\u0026rsquo;s links, including projects, guides, and management insights. Highlights include air quality sensors, remote work strategies, and more.","title":"Link Collection: February 2021"},{"content":"Welcome to the January 2021 edition of the Link Collection! This post features curated articles and projects on tech trends, productivity tools, and management strategies for professionals and enthusiasts alike.\nGeneral # Why senior engineers get nothing done\nYou start with writing code and delivering fantastic results. You\u0026rsquo;re killing it, and everybody loves you! Rock on. Then your code hits production.\nEntropy Explained, With Sheep\nLet\u0026rsquo;s start with a puzzle. Why does this gif look totally normal?\nTech Trends for 2021 and Beyond\nHow much is being invested in Europe and worldwide in tech trends such as Blockchain, Artificial Intelligence, IoT and 3D Printing, both now and in the coming years, and which countries are ahead of the rest of Europe?\nProjects # Your next meeting always before your eyes\nMeetingBar works on macOS with your calendar. Join and create meetings in one click.\nHow to Use tmux on Linux (and Why It\u0026rsquo;s Better Than Screen)\nThe tmux command is a terminal multiplexer, like screen. Its advocates are many and vocal, so we decided to compare the two.\narthepsy/ssh-audit\nSSH-audit is a tool for ssh server auditing.\nGuides # Making a smart meter out of a dumb one for $4\nAs a geek who has a few servers and other devices at home, I can\u0026rsquo;t stop thinking about my power output. I always wanted live stats on my power consumption.\nAutomating your GitHub routine\nLike many developers in the realm of Software Engineering, we are using git as our version control system.\nBuilding a self-updating profile README for GitHub\nGitHub quietly released a new feature at some point in the past few days: profile READMEs.\nUsing Ansible to automate my Macbook setup\nI am soon going to get a new Macbook, and have been thinking about how to set it up quickly and easily.\nManagement # Thoughts on giving feedback\nA good, blameless feedback culture is essential for working together efficiently as it forms healthy relationships, fuels personal and professional growth and aligns us with common norms.\nExpiring vs. 
Permanent Skills\nRobert Walter Weir was one of the most popular instructors at West Point in the mid-1800s, which is odd at a military academy because he taught painting and drawing.\n","date":"3 January 2021","permalink":"/links/link-collection-january-2021/","section":"Links","summary":"Discover January 2021\u0026rsquo;s links, featuring articles on tech trends, productivity tools, and insightful guides for developers and leaders.","title":"Link Collection: January 2021"},{"content":"I\u0026rsquo;ve wanted to decrease my reliance on Google products recently and have decided a quick way for me to do this is to host my own CardDav and CalDav server using Radicale\nCalDav can be used to host your own calendar server and CardDav is for your own contacts server.\nRadicale Configuration #Install Python #The Radicale application is written in python and as such the python package and pip are needed to set it up.\n[root@server ~]# yum -y install python36 Install Radicale #[root@server ~]# python3 -m pip install --upgrade radicale Create Radicale User and Group #[root@server ~]# useradd --system --user-group --home-dir /var/lib/radicale --shell /sbin/nologin radicale Create Radicale Storage #[root@server ~]# mkdir -p /var/lib/radicale/collections [root@server ~]# chown -R radicale:radicale /var/lib/radicale/collections [root@server ~]# chmod -R o= /var/lib/radicale/collections Create Radicale Config #Create the configuration file [root@server ~]# vi /etc/radicale/config\nAdd the following to the configuration file\n[server] hosts = 127.0.0.1:5232 max_connections = 20 # 100 Megabyte max_content_length = 100000000 # 30 seconds timeout = 30 ssl = False [encoding] request = utf-8 stock = utf-8 [auth] type = htpasswd htpasswd_filename = /var/lib/radicale/users htpasswd_encryption = md5 [storage] filesystem_folder = /var/lib/radicale/collections Add Radicale Users #Create a new htpasswd file with the user \u0026ldquo;user1\u0026rdquo; [root@server ~]# printf \u0026#34;user1:`openssl passwd -apr1`\\n\u0026#34; \u0026gt;\u0026gt; /var/lib/radicale/users Password: Verifying - Password:\nAdd another user [root@server ~]# printf \u0026#34;user2:`openssl passwd -apr1`\\n\u0026#34; \u0026gt;\u0026gt; /var/lib/radicale/users Password: Verifying - Password:\nCreate Radicale Systemd Script #Create the systemd script [root@server ~]# vi /etc/systemd/system/radicale.service\nAdd the following to the systemd service file 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 [Unit] Description=A simple CalDAV (calendar) and CardDAV (contact) server After=network.target Requires=network.target [Service] ExecStart=/usr/bin/env python3 -m radicale Restart=on-failure User=radicale # Deny other users access to the calendar data UMask=0027 # Optional security settings PrivateTmp=true ProtectSystem=strict ProtectHome=true PrivateDevices=true ProtectKernelTunables=true ProtectKernelModules=true ProtectControlGroups=true NoNewPrivileges=true ReadWritePaths=/var/lib/radicale/collections [Install] WantedBy=multi-user.target Systemd Radicale Service #Reload Systemd #[root@server ~]# systemctl daemon-reload Start Radicale Service #[root@server ~]# systemctl start radicale Radicale Service Autostart #[root@server ~]# systemctl enable radicale Check the status of the service #[root@server ~]# systemctl status radicale View all log messages #[root@server ~]# journalctl --unit radicale.service By here you should be able to connect locally to http://127.0.0.1:5232. 
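Before putting a web server in front of it, it is worth confirming that Radicale is actually answering on that address. A quick check from the same host (a minimal sketch; the exact status code returned for / can differ between Radicale versions):
[root@server ~]# curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5232/
Any HTTP status code in the response (for example 200 or 302) means the daemon is listening; a connection refused error means it is not, and the journalctl output above is the place to look.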
Next we will configure Nginx to sit in front of the Radicale service and proxy all requests.\nInstall Nginx #[root@server ~]# yum -y install nginx Nginx Configuration: #Add the following configuration to the server block in nginx.conf (Or this can be added to a virtual host)\nlocation /radicale/ { proxy_pass http://127.0.0.1:5232/; proxy_set_header X-Script-Name /radicale; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass_header Authorization; } Check Nginx Configuration #[root@server ~]# nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Restart Nginx Service #[root@server ~]# systemctl restart nginx Open Firewall Ports #Open the firewall ports as needed (80: HTTP or 443: HTTPS) [root@server ~]# firewall-cmd --add-port=80/tcp --permanent\n[root@server ~]# firewall-cmd --add-port=443/tcp --permanent [root@server ~]# firewall-cmd --reload Once you restart Nginx you should be able to access Radicale on a normal HTTP or HTTPS port by browsing to http://example.com/radicale/ and you should see the login screen.\nScreenshot of Radicale Login Login To Radicale #Use the username and password you created in the steps above to log in to the Radicale portal.\nScreenshot of Radicale Collection Create A Collection (Cal) #Click \u0026ldquo;Create new addressbook or calendar\u0026rdquo;\nScreenshot of Radicale Cal Creation Fill it in with whatever details you want, then click create.\nScreenshot of Radicale Cal Created You should now be able to add that URL to a CalDAV-enabled client and authenticate, and then you should be able to see and sync your calendar.\nFor further configuration options take a look at the Radicale Page\n","date":"23 September 2020","permalink":"/posts/radicale-carddav-and-caldav-server/","section":"Posts","summary":"To decrease reliance on Google services this how-to describes how to set up your own contact (CardDAV) and calendar (CalDAV) server","title":"Radicale CardDAV And CalDAV Server"},{"content":"Updating Gandi DNS Using the API and a Shell Script #In this tutorial, we will show you how to update your Gandi DNS records using the Gandi API and a shell script. This approach is useful for those who have dynamic IP addresses and need to keep their DNS records up-to-date.\nBy using the Gandi API and a shell script, you can automate the process of updating your DNS records, ensuring that your website or application is always available at the correct IP address.\nIn this tutorial, we will walk you through the process of creating a shell script to update your DNS records, and scheduling the script to run automatically.\nRead on to learn how to update your Gandi DNS records using the API and a shell script.\nFurther Reading If you\u0026rsquo;re interested in automating dynamic DNS updates with the Gandi API and Docker, you might find my follow-up article Automate Dynamic DNS Updates with Gandi API and Docker helpful. This article goes into more detail on how to set up the Docker container, including how to build the Docker image, run the container, and schedule tasks using cron. It also covers best practices for running Docker containers in production environments. 
Check out the article for more information and step-by-step instructions.\nDynamic DNS #Dynamic DNS is a way to associate a changing Dynamic IP address (usually residential xDSL connections) to a static domain name (DNS record)\nThis allows you to connect to home.example.com -\u0026gt; DNS Lookup \u0026amp; Resolution -\u0026gt; 203.0.113.78. The DNS entry for home.example.com is updated automatically at set intervals or when an IP address change is detected.\nDynamic DNS Providers #There are a number of Dynamic DNS providers that can be used, a well known provider is https://dyn.com/, but unfortunately some of these services come with a cost.\nGandi Live DNS #https://www.gandi.net/en provides a Live DNS Service\nLiveDNS is Gandi\u0026rsquo;s upcoming DNS platform, a completely new service that offers its own API and its own nameservers.\nThe new platform offers powerful features to manage DNS Zone templates that you can integrate into your own workflow. Features include bulk record management, association with multiple domains, versioning and rollback.\nImplementation #The below instructions will show you how to create a Dynamic DNS system using a single script and Gandi\u0026rsquo;s LiveDNS.\nPrerequisites #Make sure you have the following applications installed:\ncurl - https://curl.haxx.se/ jq - https://stedolan.github.io/jq/ Gandi LiveDNS API Key - Retrieve your API Key from the \u0026ldquo;Security\u0026rdquo; section in the Account Admin Panel Bash Script #Create a bash script and put it under \u0026ldquo;/usr/local/bin/dyndns_update.sh\u0026rdquo;. (This can of course be kept wherever you want)\nAdd the API key you got from the Gandi Account Panel.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 #!/bin/bash # This script gets the external IP of your systems then connects to the Gandi # LiveDNS API and updates your dns record with the IP. # Gandi LiveDNS API KEY API_KEY=\u0026#34;............\u0026#34; # Domain hosted with Gandi DOMAIN=\u0026#34;example.com\u0026#34; # Subdomain to update DNS SUBDOMAIN=\u0026#34;dynamic\u0026#34; # Get external IP address EXT_IP=$(curl -s ifconfig.me) #Get the current Zone for the provided domain CURRENT_ZONE_HREF=$(curl -s -H \u0026#34;X-Api-Key: $API_KEY\u0026#34; https://dns.api.gandi.net/api/v5/domains/$DOMAIN | jq -r \u0026#39;.zone_records_href\u0026#39;) # Update the A Record of the subdomain using PUT curl -D- -X PUT -H \u0026#34;Content-Type: application/json\u0026#34; \\ -H \u0026#34;X-Api-Key: $API_KEY\u0026#34; \\ -d \u0026#34;{\\\u0026#34;rrset_name\\\u0026#34;: \\\u0026#34;$SUBDOMAIN\\\u0026#34;, \\\u0026#34;rrset_type\\\u0026#34;: \\\u0026#34;A\\\u0026#34;, \\\u0026#34;rrset_ttl\\\u0026#34;: 1200, \\\u0026#34;rrset_values\\\u0026#34;: [\\\u0026#34;$EXT_IP\\\u0026#34;]}\u0026#34; \\ $CURRENT_ZONE_HREF/$SUBDOMAIN/A Run The Script #I would set this script to run via crontab every 30 minutes. 
This ensures with an IP change the Dynamic DNS would only be out of date for a maximum of 30 minutes.\nEdit crontab with the following command\n[root@server ~]# crontab -e Add the following lines to run the script every 30 minutes.\n*/30 * * * * /bin/bash /usr/local/bin/dyndns_update.sh Once the script runs it should update the dynamic.example.com dns entry with the external IP that was found by the script.\n","date":"12 November 2019","permalink":"/posts/dynamic-dns-using-gandi/","section":"Posts","summary":"This is a technical article about how to setup Dynamic DNS using Gandi.net Live DNS system.​","title":"Dynamic DNS Using Gandi"},{"content":"Background #Are you trying to extract the contents of an RPM file on your Mac? I found myself in a similar situation, wanting to view the standard contents of a configuration file stored inside an RPM. Here\u0026rsquo;s a guide on how to open and extract an RPM file on MacOS.\nProcedure #First, download and install Homebrew on MacOSX.\nMacBook:~ user$ /bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; After successfully installing Homebrew, it\u0026rsquo;s time to install the rpm2cpio utility. This tool will be crucial to our task of extracting the RPM on MacOS.\nMacBook:~ user$ brew install rpm2cpio ==\u0026gt; Downloading https://formulae.brew.sh/api/formula.jws.json ######################################################################### 100.0% ==\u0026gt; Downloading https://formulae.brew.sh/api/cask.jws.json ######################################################################### 100.0% ==\u0026gt; Fetching rpm2cpio ==\u0026gt; Downloading https://ghcr.io/v2/homebrew/core/rpm2cpio/manifests/1.4-1 ######################################################################### 100.0% ==\u0026gt; Downloading https://ghcr.io/v2/homebrew/core/rpm2cpio/blobs/sha256:a0d766ccb ==\u0026gt; Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh ######################################################################### 100.0% ==\u0026gt; Pouring rpm2cpio--1.4.arm64_ventura.bottle.1.tar.gz 🍺 /opt/homebrew/Cellar/rpm2cpio/1.4: 3 files, 3.2KB ==\u0026gt; Running `brew cleanup rpm2cpio`... With the rpm2cpio utility installed, you can now extract the RPM package in your MacOS. 
Run the following command to extract the contents of the RPM file.\nMacBook:~ user$ rpm2cpio chrony-4.3-1.el9.x86_64.rpm | cpio -idmv ./etc/chrony.conf ./etc/chrony.keys ./etc/dhcp/dhclient.d/chrony.sh ./etc/logrotate.d/chrony ./etc/sysconfig/chronyd ./usr/bin/chronyc ./usr/lib/.build-id ./usr/lib/.build-id/27 ./usr/lib/.build-id/27/22526e8b01c2e304dae76c95b96d08368d541b ./usr/lib/.build-id/bc ./usr/lib/.build-id/bc/b4a77a141da491a2df6664d74de0193e276d7c ./usr/lib/NetworkManager ./usr/lib/NetworkManager/dispatcher.d ./usr/lib/NetworkManager/dispatcher.d/20-chrony-dhcp ./usr/lib/NetworkManager/dispatcher.d/20-chrony-onoffline ./usr/lib/systemd/ntp-units.d/50-chronyd.list ./usr/lib/systemd/system/chrony-wait.service ./usr/lib/systemd/system/chronyd.service ./usr/lib/sysusers.d/chrony.conf ./usr/sbin/chronyd ./usr/share/doc/chrony ./usr/share/doc/chrony/FAQ ./usr/share/doc/chrony/NEWS ./usr/share/doc/chrony/README ./usr/share/licenses/chrony ./usr/share/licenses/chrony/COPYING ./usr/share/man/man1/chronyc.1.gz ./usr/share/man/man5/chrony.conf.5.gz ./usr/share/man/man8/chronyd.8.gz ./var/lib/chrony ./var/log/chrony 1253 blocks And there you have it, a simple and effective method to open and extract an RPM file on MacOS. Now you can navigate and explore the contents of your RPM file as needed.\nUpdates #2023-05-22: This article has been recently updated to reflect the latest commands for the Homebrew package manager and to illustrate the extraction process using a current RPM file.\n","date":"29 October 2019","permalink":"/posts/extract-an-rpm-package-on-macos/","section":"Posts","summary":"A step-by-step guide to opening and extracting RPM files on MacOS using Homebrew and the rpm2cpio utility. Learn how to install the necessary tools, extract RPM contents, and explore the files for configuration or documentation purposes.","title":"How to open and extract RPM file on MacOS"},{"content":"","date":null,"permalink":"/tags/rpm/","section":"Tags","summary":"","title":"Rpm"},{"content":"","date":null,"permalink":"/tags/article/","section":"Tags","summary":"","title":"Article"},{"content":"Overview #If you manage a team, or are looking at hiring, you need to gauge people. Technical abilities are important, but they are not the most critical criteria. Most important is attitude. A person\u0026rsquo;s attitude shapes their behaviour toward people around them. In almost all environments and organisations, no one works alone. For developers, they could be working with other developers, with a DevOps Engineer or a project/product manager, etc.\nThe ability to collaborate well with people is extremely important and attitude drives that. What makes a good employee or teammate depends less on their technical skillset and more on their attitude when working with others. When evaluating people in a team, or in general in any organisation, we can categorise people into four types: Adders, Subtractors, Multipliers, and Dividers.\nThe Different Types #Adders #These are the type of people you want in your team. They always deliver tremendous results. They are never a burden for the team as they hold their weight with excellent performance. They know what tasks need to be done and how to achieve them. They are capable of the work and they bring more benefits to the organisation than they cost.\nSubtractors #These are the type of people you want to avoid having. You can typically spot this type of person during the interview process. However, sometimes you\u0026rsquo;re stuck with them by inheritance. 
Subtractors are usually good people, well-liked employees. But their performance is not up to standard.\nDon\u0026rsquo;t mislabel subtractors alongside junior employees. Junior employees are new to the job and may not have all the skills required, so it\u0026rsquo;s understandable if their performance is coming up short. Subtractors are not new to the job yet they aren\u0026rsquo;t producing more than they cost. Sometimes subtractors have the required skillset but their results are sloppy or require constant assistance from other team members.\nThe good news is we can coach and turn subtractors into adders, provided they have the ability to learn and a positive attitude toward learning. Once turned into adders, they become very loyal employees who will grow with the organisation.\nMultipliers #Multipliers are adders who not only perform well individually but can also motivate and help others. They are productive and tend to be very proactive and have leadership skills. They know how to collaborate, how to manage up and down. They know how to communicate with the internal team as well as external partners. Most importantly, they motivate, encourage, and lift the team spirit through their work energy.\nDividers #Dividers are subtractors that not only cannot perform, but also damage your team environment. They don\u0026rsquo;t have accountability for their work. They always come up with excuses instead of realising how they may have underperformed. They backtalk and form side conversations to bad-mouth a person or a decision. They are toxic to your team environment. The longer you have them, the more damage they will do to your team culture and morale.\n","date":"11 August 2019","permalink":"/posts/asmd-the-types-of-people/","section":"Posts","summary":"This article looks at the different types of attitudes people have, using a simple categorisation of Adders, Subtractors, Multipliers and Dividers.","title":"The ASMD Types of People"},{"content":"","date":null,"permalink":"/tags/cache/","section":"Tags","summary":"","title":"Cache"},{"content":"","date":null,"permalink":"/tags/centos/","section":"Tags","summary":"","title":"Centos"},{"content":"","date":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security"},{"content":"Tripwire is an Intrusion Detection System. It is used to secure systems and creates a unique fingerprint of how a system is configured. It continually checks the system against this fingerprint, and if there are any inconsistencies between the fingerprint and the current system they are logged and a report is generated. This is a sure-fire way to tell if a system has been changed without your knowledge. 
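The underlying idea is easy to illustrate with standard tools (a rough sketch only, not a substitute for Tripwire's signed database and policy handling): record checksums of the files you care about once, then compare the current state against that baseline later.
[root@server ~]# find /etc -type f -exec sha256sum {} + > /root/baseline.sha256
[root@server ~]# sha256sum --quiet -c /root/baseline.sha256
The second command prints only the files whose checksums no longer match the baseline. Tripwire automates this comparison, tracks far more than a single checksum per file, and protects the baseline itself from tampering.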
This post will guide you through installation and configuration of Tripwire IDS running on a CentOS 7 system.\nInstall Tripwire #Install tripwire IDS from the yum repositories.\nAdd EPEL Repository #First enable the EPEL Repository.\n[root@server ~]# yum -y install epel-release Install the Tripwire Application #Install the Tripwire binaries.\n[root@server ~]# yum -y install tripwire Backup Original Configuration #Backup the original Tripwire configuration files before making any changes.\n[root@server ~]# mkdir ~/tripwire_backup [root@server ~]# cp /etc/tripwire/twcfg.txt ~/tripwire_backup/twcfg.txt [root@server ~]# cp /etc/tripwire/twpol.txt ~/tripwire_backup/twpol.txt Directory Checking #Change \u0026lsquo;LOOSEDIRECTORYCHECKING\u0026rsquo; to true.\n[root@server ~]# sed -i \u0026#39;/^LOOSEDIRECTORYCHECKING/ s/false/true/g\u0026#39; /etc/tripwire/twcfg.txt Create Keys #Create the keys to secure Tripwire.\n[root@server ~]# /usr/sbin/tripwire-setup-keyfiles Initialise DB #Initialise the Tripwire database. (A list of errors will be displayed these will be fixed later on, so are safe to ignore)\n[root@server ~]# tripwire --init A message should be displayed that the database was successfully generated.\nFix Errors #Tripwire checks a number of different settings on the system, it will check for a configuration that may not actually be included on your system and produce an error. This step will remove those errors. Create a folder for the update process and change into that directory.\n[root@server ~]# mkdir ~/tripwire_update [root@server ~]# cd ~/tripwire_update Collect all the errors and log them to a file.\n[root@server ~]# tripwire --check | grep \u0026#34;Filename:\u0026#34; | awk {\u0026#39;print $2\u0026#39;} \u0026gt;\u0026gt; ./tripwire_errors Copy the policy file\n[root@server ~]# cp /etc/tripwire/twpol.txt ~/tripwire_update/twpol.txt Create the bash script below to parse the errors file and fix the issues in the Tripwire policy file.\n[root@server ~]# cat \u0026lt;\u0026lt;\u0026#39;EOF\u0026#39; \u0026gt;\u0026gt; ~/tripwire_update/tripwire_fix_script.sh #!/bin/sh TWERR=\u0026#34;./tripwire_errors\u0026#34;; TWPOL=\u0026#34;./twpol.txt\u0026#34;; export IFS=$\u0026#39;\\n\u0026#39; for i in $(cat $TWERR); do if grep $i $TWPOL then sed -i \u0026#34;s!$i!# $i!g\u0026#34; $TWPOL fi done EOF Run the script.\n[root@server ~]# sh ./tripwire_fix_script.sh Copy the updated Tripwire policy file back to the original location.\n[root@server ~]# cp ~/tripwire_update/twpol.txt /etc/tripwire/twpol.txt Update the tripwire database from the tripwire policy that was created.\n[root@server ~]# tripwire --update-policy -Z low /etc/tripwire/twpol.txt Run a tripwire check. This check will generate a Tripwire Report usually located in /var/lib/tripwire/report/\n[root@server ~]# tripwire --check Run a check #[root@server ~]# /etc/cron.daily/tripwire-check Update (Again) #Update again to fix the errors that will be displayed because we have updated the policy file. 
Change YYYYMMDD \u0026amp; HHMMSS to the date and time that you ran the first check.\nTo find the latest one just run\n[root@server ~]# ls -la /var/lib/tripwire/report/ Update the errors\n[root@server ~]# tripwire --update --twrfile /var/lib/tripwire/report/server-YYYYMMDD-HHMMSS.twr Email Reports #Make sure you have mail installed\n[root@server ~]# yum -y install mailx Next change the Tripwire cron job to send an email report out.\nOpen the cron job file for the tripwire check\n[root@server ~]# vi /etc/cron.daily/tripwire-check Change the following line\ntest -f /etc/tripwire/tw.cfg \u0026amp;\u0026amp; /usr/sbin/tripwire --check to (Make sure to update the server name and email address of where you want the report to go to)\ntest -f /etc/tripwire/tw.cfg \u0026amp;\u0026amp; /usr/sbin/tripwire --check | /bin/mail -s \u0026#34;File Integrity Report (Tripwire) - servername\u0026#34; user@domain.tld Directory Checking (Revert) #Now we need to set Loose Directory Checking back to false.\n[root@server ~]# sed -i \u0026#39;/^LOOSEDIRECTORYCHECKING/ s/true/false/g\u0026#39; /etc/tripwire/twcfg.txt Testing #We need to test the cronjob to make sure that it will run, create the report and email it out to the address specified.\n[root@server ~]# /etc/cron.daily/tripwire-check If no errors were encountered you should have a working Tripwire setup. If any changes are made to your file system you will see them in the report that gets emailed out to you every day. If you have made changes to the system don\u0026rsquo;t forget to update, otherwise you will just see the errors growing and won\u0026rsquo;t be able to tell if something has actually changed.\n","date":"11 June 2019","permalink":"/posts/tripwire-ids-security-on-centos-7/","section":"Posts","summary":"This technical article describes how to set up Tripwire IDS on a CentOS 7 system to protect it from any intrusions.","title":"Tripwire IDS Security on CentOS 7"},{"content":"Varnish is a web cache and HTTP accelerator. It is used to improve the performance of dynamic websites by caching pages and then serving the cached version rather than dynamically creating them every time they are requested.\nInstall Varnish #Install Varnish from the Varnish repositories.\nAdd Varnish Repository #The first thing you need to do is add and enable the Varnish repository. Follow the link to install the correct version https://www.varnish-cache.org/installation/redhat\nInstall the Varnish Application #[root@server ~]# yum install varnish Configure Varnish to work with Apache #We now need to enable the configuration.\nEnable Configuration #Open the Varnish config file\n[root@server ~]# vi /etc/sysconfig/varnish Scroll down to the Alternative Configurations. The easiest way to configure Varnish is to enable configuration 2. Comment out all the other alternative configurations with a #. The configuration should look like the below snippet.\n## Alternative 2, Configuration with VCL # # Listen on port 80, administration on localhost:6082, and forward to # one content server selected by the vcl file, based on the request. Use a # fixed-size cache file. # DAEMON_OPTS=\u0026#34;-a :80 \\ -T localhost:6082 \\ -f /etc/varnish/default.vcl \\ -u varnish -g varnish \\ -S /etc/varnish/secret \\ -s file,/var/lib/varnish/varnish_storage.bin,1G\u0026#34; Line 7 tells Varnish to listen on port 80 for web traffic. Line 8 tells Varnish to listen on localhost port 6082 for admin traffic. Line 9 tells Varnish to load the default.vcl. Line 10 is the user and group to run Varnish under. 
Line 11 is the Varnish secret key. Line 12 sets the storage method Varnish uses for the cached data and the maximum size it is allowed to grow to.\nConfigure Default VCL #Open the default vcl file.\n[root@server ~]# vi /etc/varnish/default.vcl Edit the \u0026ldquo;backend default\u0026rdquo; section to look like the below.\nbackend default { .host = \u0026#34;127.0.0.1\u0026#34;; .port = \u0026#34;8080\u0026#34;; } This tells Varnish to send all traffic to localhost (127.0.0.1) on port 8080. This is the IP and port that Apache will be listening on.\nConfigure Apache to work with Varnish #Next we need to configure Apache to work with Varnish.\nConfigure Apache (Main) #Open the Apache config file\n[root@server ~]# vi /etc/httpd/conf/httpd.conf Change the \u0026ldquo;Listen\u0026rdquo; line to the following\n# # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, in addition to the default. See also the # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # Listen 127.0.0.1:8080 This makes Apache listen on 127.0.0.1 on port 8080.\nConfigure Apache (Virtual Hosts) #If you run virtual hosts on Apache you will also need to reconfigure them to listen on 127.0.0.1 on port 8080 too. Change the \u0026ldquo;NameVirtualHost\u0026rdquo; to look like this\nNameVirtualHost 127.0.0.1:8080 You will also need to change each Virtual Host section to listen on 127.0.0.1 on port 8080. Below is an example.\n\u0026lt;VirtualHost 127.0.0.1:8080\u0026gt; ServerName example.com ServerAdmin webmaster@example.com DocumentRoot /var/www/example.com/htdocs ErrorLog /var/www/example.com/logs/www.example.com.error.log CustomLog /var/www/example.com/logs/www.example.com.access.log combined \u0026lt;/VirtualHost\u0026gt; Forward User IPs to Logs #You may have seen that the web server\u0026rsquo;s logs only display 127.0.0.1 as the source IP. This causes problems when you need to run stats on the log file, as you lose quite a bit of information by losing the IPs. This is quite an easy fix.\nUpdate default VCL #Open the default.vcl\n[root@server ~]# vi /etc/varnish/default.vcl You need to update the default vcl with the below code. 
This will forward the source IP.\nbackend default { .host = \u0026#34;127.0.0.1\u0026#34;; .port = \u0026#34;8080\u0026#34;; } sub vcl_recv { remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } Apache Custom Log #We need to create a custom log to deal with the information from Varnish.\nCreate the following file\n[root@server ~]# vi /etc/httpd/conf.d/varnish-log.conf with the following content\nLogFormat \u0026#34;%{X-Forwarded-For}i %l %u %t \\\u0026#34;%r\\\u0026#34; %\u0026gt;s %b \\\u0026#34;%{Referer}i\\\u0026#34; \\\u0026#34;%{User-Agent}i\\\u0026#34;\u0026#34; varnishcombined Update Web Hosts #You will now need to update the web hosts to state that the log format will be \u0026ldquo;varnishcombined\u0026rdquo;. Below is an example.\n\u0026lt;VirtualHost 127.0.0.1:8080\u0026gt; ServerName example.com ServerAdmin webmaster@example.com DocumentRoot /var/www/example.com/htdocs ErrorLog /var/www/example.com/logs/www.example.com.error.log #CustomLog /var/www/example.com/logs/www.example.com.access.log combined CustomLog /var/www/example.com/logs/www.example.com.access.log varnishcombined \u0026lt;/VirtualHost\u0026gt; As you can see from the example above, the old \u0026ldquo;CustomLog\u0026rdquo; is now commented out and the new \u0026ldquo;CustomLog\u0026rdquo; with the varnishcombined entry is active.\nRestart Services #Restart Apache #[root@server ~]# /sbin/service httpd restart Restart Varnish #[root@server ~]# /sbin/service varnish restart Set Auto Start #Auto Start Apache #[root@server ~]# /sbin/chkconfig httpd on Auto Start Varnish #[root@server ~]# /sbin/chkconfig varnish on That\u0026rsquo;s it, you now have a working Apache web server fronted with a Varnish web cache.\n","date":"11 June 2019","permalink":"/posts/varnish-web-cache-on-centos/","section":"Posts","summary":"This technical article will walk you through setting up a Varnish web cache to cache your website.","title":"Varnish Web Cache on CentOS"},{"content":"","date":null,"permalink":"/tags/web/","section":"Tags","summary":"","title":"Web"},{"content":"","date":null,"permalink":"/tags/rhel/","section":"Tags","summary":"","title":"Rhel"},{"content":"Overview #This post describes how to enter single user mode on Redhat 7.\nModify Boot Settings #At the GRUB 2 menu press the \u0026ldquo;e\u0026rdquo; key to edit the current kernel line\nMove the cursor down to the kernel line, this is the line starting with linux16.\nOn this line remove the rhgb and quiet flags\nand then add the following rd.break enforcing=0\nrd.break will break the boot sequence at an early stage before the system boots fully. enforcing=0 puts SELinux into permissive mode.\nOnce you have made the edits above press Ctrl+x to resume the boot process using the new flags.\nThe system will continue to boot and you should be dropped into a command prompt if you entered the flags correctly.\nRemount Partitions #To edit the filesystem you have to remount it as read/write.\nswitch_root:/# mount -o remount,rw /sysroot then chroot to the mounted partition\nswitch_root:/# chroot /sysroot System Modifications #Now you are free to make modifications to your system. The example below shows you how to reset the root password, which is a common reason to go into single user mode.\nChange The Root password #sh-4.2# passwd root Changing password for user root. New password: mypassword Retype new password: mypassword passwd: all authentication tokens updated successfully. 
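If SELinux is normally enforcing on this system, the password change made from inside the chroot can leave /etc/shadow with the wrong SELinux context. A common precaution, added here as a suggestion rather than a step from the original procedure, is to request a full relabel on the next boot before leaving the chroot:
sh-4.2# touch /.autorelabel
Then continue with the exit commands below.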
sh-4.2# exit exit switch_root:/# exit logout ","date":"9 July 2018","permalink":"/posts/single-usermode-rhel7/","section":"Posts","summary":"This technical article describes how to get into single user mode on Redhat 7 OS.","title":"Single Usermode RHEL7"},{"content":"","date":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"Businesses are continually looking for ways to use cloud computing to reach their goals. With so many great benefits to offer, cloud computing is definitely become the way of the future. During 2017 we’ve seen an increase in cyber attacks such as the WannaCry ransomware and CIA Vault 7 hack, making it even more important to ensure that security remains one of the most important features of cloud computing.\nAnother important factor is cost; cloud storage prices are also falling, allowing for a new era to emerge during the next few months. When we look at new cloud technology trends in 2018, here are the ones you should watch:\nContainer Orchestration with Kubernetes #One of the most talked about technologies is undoubtedly the role Kubernetes will play in cloud computing in 2018. Kubernetes – much like Docker for containers – has become the cloud orchestrator of choice. Kubernetes can be used by developers to easily migrate and manage software code.\nKubernetes has been adopted throughout the industry, including Docker and Microsoft Azure, showing just how effective this open-source container orchestration system is. It provides simpler cloud deployment and efficient management.\nCloud Cost Containment #With the recent announcement from AWS that they will be providing per-second billing for EC2 instances, other providers are also expected to announce updated pricing plans. In general, it is much easier to calculate the cost for single cloud provider as opposed to calculating the cost in a multi-cloud environment. Multi-cloud environments are difficult because there are different pricing plans for cloud providers. With different cloud service pricing and consumption plans available, pricing can vary greatly between providers.\nServerless Architecture #One of the great benefits of cloud computing is the ability to use extra resources and pay for what you use. This model allows for a VM, or instance, to be a unit for an additional compute resource. This means a ‘function’ has become an even smaller unit of use. It’s cost-efficient for the cloud provider to manage and scale resources on demand in the cloud, reducing all the heavy lifting that was usually required. There is a limitless supply of virtual machines, so there are no upfront costs and a lot of flexibility exists.\nCloud Monitoring as a Service (CMaaS) #Another popular trend that comes from the growing demand for hybrid cloud solutions is Cloud Monitoring as a Service (CMaaS). CMaaS is used to monitor the performance of multiple servers that are interdependent to the service delivery of a business. These services should be independent of the providers themselves and it can be used to monitor in-house environments and host various cloud services by installing gateways to the environment.\nCloud Facilitation for IoT #Gartner Research predicts that there will be around 20 billion mobile devices worldwide by 2020. With so many devices around, the cloud will play a much more significant role. You’ll also need more space to store data such as documents, videos and images, which all help drive the need for IoT in so many ways. 
We should see a lot of development towards IoT in 2018.\nMulti Cloud Strategy #Multi cloud strategies will become a dominant factor in 2018. It allows organizations to deploy different workloads and separately manage them. International Data Corporation predicts that more than 85 percent of enterprise IT corporation will adopt multi cloud technology by 2018. Organizations can save significantly be adopting a multi cloud strategy as they won’t be locked in with only one provider. Enterprises can save millions per year.\nThe Popularity of Cloud Based Big Data Mining #Many companies are launching IoT applications in 2018 and they will rely heavily on big data generated from these applications. However, they don’t necessarily have a great way to mine the data, which is where cloud technology comes in. Cloud based big data mining will definitely see an increase this year, helping companies to use the data from their applications.\nProactive Cloud Analytics with AI #AI can be seen in many areas of our lives; just look at digital assistants like Siri and Cortana, as they all use AI to provide useful information and execute tasks. Companies will incorporate AI into their analytics streams to make proactive business decisions so that they can automate their response and allow for actionable information and recommendations.\nCloud Security Will Remain a Priority #Cloud computing is still emerging and as such, requires a different approach to security than traditional IT infrastructures. In 2018, cloud security will be more important than ever and this offers a great opportunity for cloud solution providers to come up with a robust security solution that is effective for their customers.\nDuring 2018 we will definitely see cloud becoming more strategic, with the help of a few great technologies. It is expected that the adoption of the services above will help to increase performance and automation in terms of cloud computing.\n","date":"18 June 2018","permalink":"/posts/cloud-technology-to-watch-2018/","section":"Posts","summary":"In this article we look at some cloud technology to keep an eye on in 2018.","title":"Cloud Technology to watch in 2018"},{"content":"","date":null,"permalink":"/tags/technology/","section":"Tags","summary":"","title":"Technology"},{"content":"","date":null,"permalink":"/tags/blog/","section":"Tags","summary":"","title":"Blog"},{"content":"","date":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo"},{"content":"In a previous post I mentioned that I am moving to Hugo from Wordpress. One of the main reasons for this is to be able to store my blog in Github to allow for version control.\nAutomating Hugo #One thing that I missed from Wordpress was the automated way that it works. In Wordpress you write a draft post, add some images and then publish, thats it. For Hugo you create a post, then use Hugo cli to generate the static content, then upload this to a web server and then its published for the world to see. 
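To make this concrete, a typical manual publish looks something like the commands below (the post name, paths and remote server are just placeholders, not my actual setup):
hugo new posts/my-new-post.md
hugo
rsync -av public/ user@webserver:/var/www/blog/
The first command creates the markdown file, the second generates the static HTML into the public folder, and the third copies that folder to the web server.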
Too many manual steps that makes life difficult.\nHigh Level Automation Workflow # Screensot of Hugo WQorkflow High Level Automation Steps # Create articles/posts in markdown (Local) Generate the static HTML (Local) Push static HTML to Github (Remote) Github fires a webhook to my web server (Remote) Webhook invokes a pull of the static content from Github (Server) Automated pull of repository Static content is served from the server (Server) Setup Steps #Create a Github Repository #Create a Github repository for the public folder that is generated by hugo.\nScreensot of Repo Creation Creation Create a Webhook #Create the webhook within the repository you just created, this will fire when new code is pushed to this repository.\nScreensot of Webhook Creation Setup the Webhook Server #Use webhook server for a lightweight webhook server and install this on the webserver.\nCreat the hooks.json below this has the configuration for the webhook.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [ { \u0026#34;id\u0026#34;: \u0026#34;deploy-public\u0026#34;, \u0026#34;execute-command\u0026#34;: \u0026#34;/somepath/deploy-public.sh\u0026#34;, \u0026#34;command-working-directory\u0026#34;: \u0026#34;/somepath\u0026#34;, \u0026#34;trigger-rule\u0026#34;: { \u0026#34;and\u0026#34;: [ { \u0026#34;match\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;payload-hash-sha1\u0026#34;, \u0026#34;secret\u0026#34;: \u0026#34;**********\u0026#34;, \u0026#34;parameter\u0026#34;: { \u0026#34;source\u0026#34;: \u0026#34;header\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;X-Hub-Signature\u0026#34; } } }, ] } } ] Bash Script #Next create a bash script deploy-public.sh to actually carry out the work of archiving the existing public folder and then replacing it with a cloned version from Github.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 #!/bin/bash #Name: deploy-public.sh #Set Vars LOGFILE=\u0026#34;/somepath/log.log\u0026#34; TIMESTAMP=`date \u0026#34;+%Y-%m-%d_%H%M%S\u0026#34;` DIRECTORY=\u0026#34;/somepath/virtuallytd.com\u0026#34; ## Backup of current site if [ ! -d \u0026#34;${DIRECTORY}/archives\u0026#34; ]; then mkdir ${DIRECTORY}/archives fi cd ${DIRECTORY} tar -cf ./archives/public-${TIMESTAMP}.tar ./public gzip -7 ./archives/public-${TIMESTAMP}.tar rm -fR ./archives/public-${TIMESTAMP}.tar ## Remove the old public site rm -fR ${DIRECTORY}/public ## Clone the new public site git clone git@gitserv:virtuallytd/blog-public.git ./public Start Webhook Server #With all the above in place you should be able to start the webhook server and have it listen for connections.\nThe verbose flag is set for testing the setup. Webhook server will bind to port 9050 on the external IP you set. 
This can also be proxied as not to expose the service externally.\n/somepath/webhook -hooks /somepath/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 Testing #If you test a connection and all is working well you should see some output like this from the webhook command and the public folder should have been updated and an archive created.\n[root@server ~]# /usr/local/bin/webhook -hooks /etc/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 [webhook] 2018/06/17 20:44:56 version 2.6.8 starting [webhook] 2018/06/17 20:44:56 setting up os signal watcher [webhook] 2018/06/17 20:44:56 attempting to load hooks from /somepath/hooks.json [webhook] 2018/06/17 20:44:56 found 1 hook(s) in file [webhook] 2018/06/17 20:44:56 loaded: deploy-public [webhook] 2018/06/17 20:44:56 serving hooks on http://\u0026lt;External IP\u0026gt;:9050/hooks/{id} [webhook] 2018/06/17 20:44:56 os signal watcher ready [webhook] 2018/06/17 20:45:11 [xxxxxx] incoming HTTP request from \u0026lt;External IP\u0026gt;:42138 [webhook] 2018/06/17 20:45:11 [xxxxxx] deploy-public got matched [webhook] 2018/06/17 20:45:11 [xxxxxx] deploy-public hook triggered successfully [webhook] 2018/06/17 20:45:11 200 | 644.658µs | \u0026lt;External IP\u0026gt;:9050 | POST /hooks/deploy-public [webhook] 2018/06/17 20:45:11 [xxxxxx] executing /somepath/deploy-public.sh (/somepath/deploy-public.sh) with arguments [\u0026#34;/somepath/deploy-public.sh\u0026#34;] and environment [] using /somepath as cwd [webhook] 2018/06/17 20:45:13 [xxxxxx] command output: Cloning into \u0026#39;./public\u0026#39;... [webhook] 2018/06/17 20:45:13 [xxxxxx] finished handling deploy-public If you have any issues with this make sure to check the logging from the webhook server and also check in Github under the webhook page for any responses/errors.\nAutomate Pull from Github #To create an automated pull of data from the github repository we need to configure a deployment key. This key will allow a git pull (Read Only).\nCreate the SSH key #Generate an SSH Key on the server\n[root@server ~]# ssh-keygen -t rsa -C \u0026#34;deploykey@example.com\u0026#34; Save the key somewhere on the system.\nConfigure SSH Credentials #Edit the file [root@server ~]# vi /root/.ssh/config\nAdd the following lines Host gitserv Hostname github.com User git IdentityFile /root/.ssh/id_rsa IdentitiesOnly yes\nAdd Public Deploy Key to Github #Open your repository and go into settings \u0026gt; Deploy Keys.\nIn here add the public key of the keypair we generated in the step before and click save.\nNow when the script we created earlier invokes a git pull, it will use this configuration and use the deploy ssh key to connect to github.\nManaging the Webhook Service #To efficiently manage the webhook server, a systemd service can be created. 
This allows the server to start and stop the webhook service automatically.\nCreating a systemd Service #Create a file named webhook.service in the /etc/systemd/system/ directory with the following content:\n[Unit] Description=Webhook Service After=network.target [Service] Type=simple ExecStart=/usr/local/bin/webhook -hooks /etc/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 ExecStop=/usr/local/bin/stop_webhook_script.sh Restart=on-failure [Install] WantedBy=multi-user.target Creating the Stop Script #Create a script stop_webhook_script.sh to stop the webhook service:\n#!/bin/bash # Find and stop the webhook process PID=$(ps -ef | grep \u0026#39;/usr/local/bin/webhook\u0026#39; | grep -v grep | awk \u0026#39;{print $2}\u0026#39;) if [ ! -z \u0026#34;$PID\u0026#34; ]; then kill $PID echo \u0026#34;Webhook service stopped.\u0026#34; else echo \u0026#34;Webhook service is not running.\u0026#34; fi Place this script in /usr/local/bin and make it executable with chmod +x /usr/local/bin/stop_webhook_script.sh.\nManaging the Service #Enable the service to start on boot with sudo systemctl enable webhook.service. Start it with sudo systemctl start webhook.service and stop it with sudo systemctl stop webhook.service.\n","date":"17 June 2018","permalink":"/posts/hugo-deployment-automation/","section":"Posts","summary":"In this technical article we look at the process of automating a Hugo deployment from a Github commit.​","title":"Hugo Deployment Automation"},{"content":"","date":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai"},{"content":"Cloud computing already plays an important role in our modern lives, but recent developments in artificial intelligence (AI) coupled with the improvements in programming promises a whole new age of cloud computing. We’ll take a closer look at how that technology is quickly emerging and how it will have an impact on our daily lives.\nEvery person with technical knowledge knows that cloud technology brings huge potential and that it has already influences how businesses and people store data and process information. But because cloud technology is fairly new, companies have to think about how it will evolve over time. Things like the rise of mobile technology and the Internet of Things (IoT) have resulted in changes to the cloud - but now there’s something new on everybody’s lips: artificial intelligence. It could improve cloud technology in so many ways.\nWhen IBM spoke about the combination of AI and cloud, they said that it\nPromises to be both a source of innovation and a means to accelerate change.\nThe cloud can help to provide AI with the information it needs to learn, while AI can provide the cloud with more data. This relationship can help to completely transform how AI is developed and the fact that cloud companies such as IBM are spending a lot of time and resources into AI, shows that this is a real possibility.\nCloud technology is spread among a number of servers in various languages with huge data storage. Companies can use this to create automated solutions for their customers. Cloud computing is getting more powerful with AI, making it possible for companies to use AI cloud computing to reach long term goals for their customers.\nAnother important aspect of combining AI with the cloud, is that it can potentially change the manner in which the data was stored earlier and processed. 
This has huge potential and will allow professionals to look over the boundless possibilities for the future.\nAlthough cloud computing on its own has the capability to become a significant technology in many fields, the combination of cloud and AI will enhance it. Cloud computing will be much easier to scale and manage with the help of artificial intelligence. What’s more, the more businesses get on the cloud, the more it needs to be integrated with AI to remain efficient. There will come a point in time when cloud technology can’t exist without AI.\nA Deeper Understanding of AI #Artificial intelligence is much like an iceberg, as there is a lot more hidden that first meets the eye. AI is yet to show its true potential, and it is changing the world of computing together with cloud technology. In fact, it’s believed to be the future of computing.\nAI has the potential to further amplify the amazing capabilities of cloud computing, as it provides tremendous power. It allows machines to react and think like humans do, and helps machines to effectively analyze and learn from historical data, while identifying patterns and making real-time decisions. This may very well lead to an automated process that will virtually eliminate the possibility of human error.\nTech companies can now create AI which can learn. A good example of this is when an AI beat the world’s best Go player. How? By playing millions of games with itself and learning about strategies that players have not yet considered.\nOf course, AI has far better capabilities than just playing games. It is becoming a major player in conversation, where voice-activated AI systems can respond to human commands.\nWhile we are already enjoying assistants like Cortana which can respond to voice commands, tech companies are focusing on developing AI systems that can learn how to respond differently. There is still a lot to be done, but the goal is for an AI to communicate like a human.\nCombining AI and the Cloud #As mentioned, companies who specialize in either AI or cloud are dedicating more of their time and resources into learning both technologies and its capabilities. Basically, cloud AI technologies take one of two forms, it’s either a platform like Google Cloud Machine Learning, which combines machine learning with the cloud, or they are AI cloud services such as IBM Watson.\nWired recently reported on how companies are relying on IBM Watson to help fight cybercrime. But it’s not as simple as simply plugging in the technology and letting it work; Watson has to be taught how to deal with hackers and cyber criminals, and it becomes more effective over time as it stores information.\nIt\u0026rsquo;s interesting to note that while Watson knows so much, and can read far more reports than humans can, it still makes odd mistakes. That’s why researchers are helping Watson and guiding it to think correctly and eventually make no mistakes. At this point in time, AI, cloud, and humans all need each other in some way.\nBy combining AI and the data stored with technology, both AI and humans can analyze more and gather more data than ever before. 
Tech experts have indicated that this may be the year when AI becomes a significant role player in our daily lives and that its capabilities will only be improved with the development of cloud technology.\n","date":"13 June 2018","permalink":"/posts/does-ai-have-a-future-in-cloud-computing/","section":"Posts","summary":"This article discusses the whether AI has a future in cloud computing","title":"Does AI Have A Future in Cloud Computing"},{"content":"","date":null,"permalink":"/tags/containers/","section":"Tags","summary":"","title":"Containers"},{"content":"","date":null,"permalink":"/tags/virtualization/","section":"Tags","summary":"","title":"Virtualization"},{"content":"One of the most popular topics these days concerns containers, and what their role is. Containers have become increasingly important recently, mainly thanks to Docker. Various major providers such as IBM, VMware and Amazon Web Services have all embraced containers with open arms. As a result, this discussion has become a very popular topic and people are asking whether containers will be taking over and replace virtual machines.\nWhat Are Containers? #Containers essentially aren\u0026rsquo;t new, as they became popular a few years ago when Docker unveiled a new way to manage applications simply by isolating specific codes. This refers to a piece of lightweight software that has everything required to successfully run an application. Multiple containers can run on the same operating system and share resources.\nContainers are a hot topic these days, as the world’s top IT companies are using them. They promise a streamlined method of implementing infrastructure requirements, and they also offer a great alternative to virtual machines. In short, if anything goes wrong in the container, it only affects that single container, and not the whole server.\nWhat Are Virtual Machines? #A virtual machine refers to an operating system that fulfills various functions on software instead of hardware. A hypervisor can abstract applications from the specific computer, which allocates resources such as network bandwidth and memory space, to multiple virtual machines. With this technology, service providers can increase network functions running on expensive nodes automatically. Hypervisors work to separate an operating system and applications from the physical hardware. They allow the host machine to operate various virtual machines as guests and thereby maximize the use of resources such as network bandwidth and memory.\nHypervisors metaphorically died when Intel launched their Intel-VTx chip. Before this, Xen and VMware had two different ways in approaching hypervisor capabilities, namely paravirtualization and binary translation. Arguments were held about which was best and faster than the other, but as soon as Intel VTx came along, it was the winner and both Xen and VMware started using this chip.\nAs we move towards cloud applications there is a need to standardize underlying operating systems as you can’t get the same efficiency when you run 10 different operating systems. Whether you are moving towards PaaS or containers, either way, you are slowly moving away from heterogeneity.\nWhy Are Containers So Popular? #In general, containers are much more effective than virtual machines, simply because of the way in which they allocate resources. Containers run in an isolated environment and they have all the necessary resources to run an application. 
The remaining resources that are not used, can be utilized to run other applications, and as a result, containers can run two or three times as many applications as an individual server. Apart from increasing the efficiency of a system, this technology also allows us to save money by not having to invest in more servers in order to handle multiple processes.\nAnother reason why containers are seen as supporting virtual machines, is the fact that they can handle a quicker boot up process. With a typical virtual machine taking up to around a minute to boot, a container can do this in a micro second.\nPaaS tools such as Cloud Foundry, and systems such as Mesos and Kubernetes are already designed to scale your workload drastically as they detect performance failures and take various proactive steps to deal with them.\nContainers have a minimalist structure and that is a key differentiator. Unlike virtual machines, they don’t need a full operating system installed in the container, and don’t need a copy of the hardware. They operate with the minimum amount of resources and they are designed to perform the task they were designed for. A container’s ephemeral nature is another distinguishing characteristic. Containers can be installed and removed without any major disruption to the system. If an experiment should fail, the newer version can be rolled back and replaced. This is a new way of managing a data center and it’s key to the overwhelming interest that technology companies have expressed in Docker and its associated technologies recently.\nVirtual Machines Are Still Useful #Even though containers have many advantages to offer over virtual machines, they are not without fault. One of the biggest issues that comes with containers is its security. Because of the fact that containers use the same operating system, a security breach can occur much easier. A security breach can allow access to the entire system, in comparison to virtual machines. Also, since many container applications are available online, it opens up the window for additional security threats. If the software is infected with malware, which has the ability to spread to the entire operating system.\nSince containers have their advantages and disadvantages, it’s safe to say that virtual machines are not going anywhere – yet. They will likely not replace virtual machines completely, as these technologies complement each other rather than replacing each other. Hybrid systems are currently being develop to utilize the best advantages of both.\n","date":"4 March 2018","permalink":"/posts/will-hypervisors-be-replaced-by-containers/","section":"Posts","summary":"This article discusses if hypervisor technology will be replace containers technology.","title":"Will Hypervisors Be Replaced By Containers"},{"content":"","date":null,"permalink":"/tags/architecture/","section":"Tags","summary":"","title":"Architecture"},{"content":"","date":null,"permalink":"/tags/infrastructure/","section":"Tags","summary":"","title":"Infrastructure"},{"content":"Hyperconvergence refers to a framework that combines networking, computing and storage into one system in an effort to reduce the complexity of data centers and to increase scalability. 
Hyperconverged platforms include a hypervisor for virtualized networking and computing, and typically run on basic server systems.\nThe term hyperconverged infrastructure was coined by Forrester Research and Steve Chambers in 2012 to describe an infrastructure that virtualizes all the elements of a conventional system. This infrastructure typically runs on standard off-the-shelf servers.\nToday, companies typically use this infrastructure for virtual desktop infrastructure, remote workloads, and general-purpose workloads. In some cases, companies use it to run high performance storage, mission critical applications, and server virtualization.\nThe Benefits #The benefits of hyperconvergence include the fact that it is a hardware-defined system that is geared toward a purely software-defined environment where every element runs on commercial servers. The convergence of elements is facilitated by a hypervisor. These systems are made up of direct-attached storage and includes the ability to plug and play into a pool of data-like systems. All physical resources reside on one platform for software and hardware layers, and as an added benefit, these systems eliminate the traditional data-center inefficiencies and reduces total cost of ownership.\nThe servers, storage systems and networking switches are all designed to work together as one system, so it increases ease of use and improve efficiency. Companies can start small and grow bigger as scalability will always be an added benefit. It will also lead to cost savings in terms of power and space, and the avoidance of licensed backup and recovery software.\nThe potential impact is that companies will no longer need to rely on various different storage systems, and it will likely further simplify management and increase resource utilization rates.\nThere is always pressure on an IT department to provide resources instantly, data volume growth is unpredictable, and software defined storage promises great efficiency gains. These are just some of the trends taking place, which is some of the reasons why hyperconverged infrastructure has become so popular in recent years.\nHow Does Hyperconvergence Differ From Converged? #One major difference is that hyperconvergence adds more levels of automation and deeper levels of abstraction. This infrastructure involves preconfigured software and hardware combined in a single system with simplified management.\nWhere legacy systems relied on separate storage, networks and servers, hyperconvergence allows for the simplicity and reliability of using one single system. This also reduces the risk of failure as silos created by traditional infrastructure present barrier to progress and change.\nThis technology will simplify datacenter operations by streamlining deployment, management, and scaling of resources. This is achieved by combining the server and storage resources with intelligent software. Separate servers and storage networks can be replaced with a single solution to create a scalable, agile datacenter solution.\nThe Components Of Hyperconverged Solutions #There are several components that form a hyperconverged solution, including:\nA Distributed Data Plane: This runs across a collection of nodes and deliver networking, virtualization and storage services for applications. This can either be container-based applications or VMs. 
A Management Plane: This allows for easy administration of all resources with the help of a single view and also eliminates the need for separate servers, virtualization, and storage network solutions. Almost all modern hyperconverged solutions are 100 percent software defined. There is no dependency on hardware, as each cluster runs a hypervisor – such as VMware, Microsoft Hyper-V or Nutanix AHV. How Is It Sold #Hyperconverged technology is available as a software-only model, a reference architecture, or an appliance. You can expect bundled capabilities such as data deduplication, data protection, snapshots, compression and WAN optimization, as well as disaster recovery and backup as part of the vendor’s offering.\nThere are various specialist vendors that include SimpliVity, Nutanix and Pivot3. There are also a few big system vendors that entered the market, such as Dell-EMC, Cisco and HPE. The market for hyperconverged integrated systems (HCIS) is predicted to reach nearly $5 billion by 2019, which represents 24 percent of the overall market, as technology moves to mainstream use.\nAt the Gartner Infrastructure, Operations \u0026amp; Data Center Summit in Australia, Andrew Butler, vice president at Gartner, said\nThis evolution presents IT infrastructure and operations leaders with a framework to evolve their implementations and architectures.\nHe believes that HCIS is not a destination, but an \u0026ldquo;evolutionary journey\u0026rdquo;.\nThe cost of such an infrastructure can vary dramatically, depending on the underlying hypervisor. It depends on the licensing built in, as well as other costs involved in configuring the software for use in a specific environment. Due to the fact that storage is a software service, there is no need for expensive hardware infrastructure, which is an added benefit.\nBuilding a hyperconverged system in a corporate environment is more than just replacing a few devices it requires various aspects and all kinds of IT staff to support it.\nSoftware defined data center solutions manager at Hewlett-Packard, Niel Miles, described \u0026ldquo;software defined\u0026rdquo; as programmatic controls of a company’s infrastructure as it moves forward. Existing technology cannot keep up with the changes, requiring additional software.\nIn Conclusion #Although the concept is only about five years old, there are a few fundamental differences between hyperconverged infrastructure and converged infrastructure. It’s the latest step In pursuing an infrastructure that is easy and cost-effective to manage, and allows you to tidy up a datacenter infrastructure completely.\n","date":"27 February 2018","permalink":"/posts/what-is-hyperconverged-infrastructure/","section":"Posts","summary":"This article looks at Hyperconverged Architecture, what it is and how it can help.","title":"What is Hyperconverged Infrastructure"},{"content":"When it comes to cloud servers and old vs new technology, the concept was usually a difficult one to grasp – until experts started using the popular analogy of pets vs cattle. It helped to perfectly explain the old technology vs the new, and how you can differentiate between the two. 
It was a vital tool to understand the cloud, and the new way of doing things.\nWith so many confusing terminology and concepts to keep track of, this analogy aims to set the record straight and offer an accurate reference that everyone can use.\nThe Background #Back in 2011, cloud pioneer and member of OpenStack Foundation, Randy Bias, struggled to explain how cloud native apps, AWS, and cloud in general was very different from what it was before. Since most explanations took a lot of time, he wanted something simple and effective, and he did some research – until he came upon a presentation by Bill Baker, where he was focusing mainly on ‘scale-out’ and ‘scale-up’ architectures in general.\nBut most importantly, Bill used the context of comparing pets with cattle when he talked about ‘scale-up’ and ‘scale-out’ technology. When you put pets and cattle in the context of cloud, and focus on the fact that pets are unique and cattle are disposable, it makes a lot of sense.\nIn short, if you see a server as being replaceable, it’s a member of the herd. But if you see a server as indispensable (for e.g. a pair of servers working together as a single unit), it’s a pet. Randy explains it best\nIn the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.\nThis is basically the pitch he would use, word for word.\nUnderstanding Pets and Cattle #Let’s take a minute to clearly define pets and cattle. When we talk about pets, we refer to servers that are seen as irreplaceable, or unique, and basically a system that cannot ever be down. These are typically manually built and managed, and also ‘hand fed’. Some examples can be solitary servers, firewalls, database systems and mainframes.\nWhen we talk about cattle, we refer to collections of more than two servers that are built with automated tools and designed to fail at some point. During failure of these servers, human intervention is not needed as they can route around failures by restarting failed servers or simply replacing them. Some examples of these servers include multi-master datastores, web server arrays, and basically anything that is load balanced. The key to remember here is that failures can and will happen, so every server and every component should be able to fail without impacting the system.\nThe concept has been around for quite a while, as Yale computer scientist David Gelemter used it to explain file systems. He said\nIf you have three pet dogs, give them names. If you have 10,000 head of cattle, don’t bother.\nThis explanation has helped educate various IT professionals, giving them the tools to further explain the old vs the new.\nExpanding on the Analogy #It’s important to stick to the explanation above, or at least start with it, before moving to your own adaptation. Some people have expanded on this analogy and made their own unique version to explain their point – which is perfectly fine – but it can create a bit of confusion.\nHere’s an example, used by the Kubernetes team to explain their \u0026ldquo;Pet Sets\u0026rdquo; addition to their functionality. While they understandably took the pets vs cattle analogy and interpreted it to explain their stateful applications, it was a bit confusing for some. 
Particularly because they used examples of stateful applications supported in Kubernetes 1.3 using Pet Sets, which are cattle-architecture systems. They are all designed for failure, and by their definition, they now use cattle data stores using Pet Sets.\nIt is important that we don’t confuse people when they try to understand the new technology, how it works and why it is important.\nGetting Value from the Analogy #If you want to take the pets vs cattle analogy and amend it to suit your specific needs, you are certainly free to do so. But just understand where it comes from, how it is used, and how it can help people to understand the complex principle of modern server architecture. It might be a good gesture to acknowledge where the analogy came from and where you draw your inspiration, by referring back to the original blog post for reference and the true history.\nUltimately, focusing on the fact that servers are disposable – a fact that Google actually pioneered – is a very important fact for the pets vs cattle analogy. Using this and focusing on another aspect, or describing something that it is not intended to explain, can add mud to the water and confuse some people on the issue at hand.\nIn Conclusion #By understanding and accurately representing the true origins of this analogy, we will maintain its value to those new to the concept of how computing is now delivered. Cloud technology is undoubtedly the way of the future, and explaining this correctly will make all the difference.\n","date":"20 February 2018","permalink":"/posts/pets-vs-cattle-analogy-explained/","section":"Posts","summary":"This article explains the Pets vs cattle analogy when describing server infrastructure in IT.","title":"Pets vs Cattle Analogy Explained"},{"content":"Serverless architecture is often referred to as Function as a Service (FaaS) or serverless computing, and it is widely used for applications that are deployed in the cloud. With serverless architecture, there is no need for server hardware and software to be managed by the developer, as these applications are dependent on third party software.\nIn a serverless environment, applications divided into individual functions, and these can be scaled and invoked individually. It’s a powerful solution for many application developers, but it’s important to understand exactly what it is, and what the possible vulnerabilities can be.\nServerless technology is already a popular topic in the software world, and there are many vendor products, books and open source frameworks dedicated to this. Its use has become very popular solution for many organizations deploying cloud applications, with even some of the traditionally conservative organizations using some form of serverless technologies.\nThis software trend delivers the scaling necessary and reduces time-to-market for a reliable, effective application platform. Just think Uber, Airbnb and Instagram – they all have large user databases and real-time data that functions seamlessly due to serverless architecture. And between Google’s Play Store and Apple’s App Store, there are more than four million apps competing for attention, making serverless architecture a great way to gain a competitive advantage and reduce development costs, which can easily top six figures. The term ‘serverless’ has received some backlash, as it implies that there are no servers at all, but in fact that are naturally still servers running in the background. 
The difference is that they are managed by vendors but you don’t have access to change or manage them. That’s also why many feel it should be referred to as Function as a Service.\nThe Benefit of Serverless Architecture #When you think of software applications being hosted on the Internet, it usually means that you need to have some sort of server infrastructure. This typically means either a physical or virtual server that needs to be managed, including all the different hosting processes and operating system that it needs for your application to run. Using a virtual server from providers such as Microsoft or Amazon, you can eliminate any hardware issues, but you’ll still have to manage the server software and operating system.\nWhen you move to serverless architecture, you focus only on the application code’s individual functions. Popular services like Microsoft Azure Functions, AWS Lambda and Twilio Functions all take care of the physical hardware, the web server software, and the operating system This means you only need to focus on the code.\nHere are a few great benefits of using serverless architecture:\nBetter scalability. Developers all want their apps to be successful, but if it does happen, they need to make sure they can handle it. That’s why provisioning infrastructure is a great choice to make, as you will be prepared when success strikes.\nReduce time to market. Developers can now create apps within days or even hours, instead of weeks and months. There are many new apps that rely on third-party APIs including social channels like Twitter, maps like Mapbox, and authentication like OAuth.\nLower developer cost. Serverless architecture significantly reduces the need for human resources and computing power. Servers don’t need to be so expensive anymore; plus, if you don’t need always-on servers, your running costs will reduce even more.\nServerless architecture also allows for faster innovation, and this means product engineers can innovate at a rapid speed since this technology reduces any system engineering problems. This means less time for operations, and a smoother application. Product engineers can now rather focus their attention on developing the business logic of the application.\nHaving access to out-of-the-box scalability is one of the major reasons why developers use serverless architecture. Costs are kept to a minimum, as you are basically only paying when something happens, i.e. a user takes a certain action. Generally speaking, this is a great solution for most developers looking for a cost-effective solution.\nPossible Drawbacks #Serverless architecture remains one of the best technologies yet, but it’s worth noting that it may in some cases have slight drawbacks that developers should be aware of.\nHere are a few aspects to consider:\nComplex architecture. It might be challenging to manage too many functions simultaneously, especially since it can take time to decide how small every function should be. There needs to be a balance to the amount of functions that can be called by an application. AWS Lambda, for example, has limits as to how many concurrent executions you can run of your lambdas.\nNot enough operational tools. Developers rely on vendors to provide monitoring and debugging tools. Debugging systems can be difficult, and will require access to a lot of relevant information to help identify the root cause.\nImplementation testing. Integration tests can be tough to implement. 
The units of integration, or function, is smaller than with other architectures, and this means developers rely much more on integration testing that with other architectures. There can also be problems with versioning, deployment and packaging.\nThird-party API system problems. Some of the problems due to the use of third-party APIs can include vendor lock-in, multi-tenancy problems, vendor control, and security issues. Giving up system control while APIs are implemented can cause loss of functionality, system downtime and unexpected limits.\nIn Conclusion #With serverless technology, applications can be built faster, and scaled more effectively. Additional computing power can be assigned automatically, and there is no need for developers to monitor and maintain complex servers.\nServerless architecture can accommodate a wide range of developing needs. From connected aircraft engines to file-sharing apps - data continues to grow and evolve, and serverless will become the standard in development and execution of various functions.\nBy significantly reducing development and management costs, serverless architecture is set to completely take over the software architecture space.\n","date":"1 February 2018","permalink":"/posts/what-is-serverless-architecture/","section":"Posts","summary":"This article describes what serverless architecture is and how it can be used.","title":"What is Serverless Architecture"},{"content":"Over the last few years, Docker has relied on their own container management system to not only form the roadmap of their company, but also attract high dollar investors. But this all changed as the company announced their support of Kubernetes at DockerCon Europe 2017 in Copenhagen. With Docker being the leading platform for software containerization, this announcement shows just how valuable Kubernetes are in the container orchestration space.\nDocker has always focused on the developer, offering the ability to use a standard framework to build, ship and run applications. Their primary platform to orchestrate containers is Docker Swarm., which also offers a close integration with Docker Enterprise Edition. With the integration of Kubernetes, Swarm offers value-added capabilities above Kubernetes.\nOrganizations will now be able to make use of Kubernetes, while still relying on Docker’s various management features, including security scanning. In addition to Windows and Linux, the system will also be compatible with a variety of Docker-certified container images.\nDocker and Kubernetes have been competing against each other since 2015, making this move even more genius. In 2016, Docker partnered with Microsoft and brought its container runtime to the Azure cloud platform, gaining a lot of Windows platform support.\nSo, Why Kubernetes? #Kubernetes, also referred to as k8s, was originally developed by Google, and is now hosted by the Cloud Native Computing Foundation (CNCF). It’s an open source platform that aims to enhance cloud native technology development by using a new set of container technologies. With Kubernetes, you can deploy and schedule container applications in both virtual and physical environments, making it a leading container orchestration engine.\n“We’re embracing Kubernetes into our product line. We’re bringing Kubernetes into Docker Enterprise Edition as a first-class orchestrator right alongside Docker Swarm,” said Scott Johnston, Chief Operating Officer of Docker. 
He also mentioned that they will be integrating Kubernetes into their Mac and Windows products. Steve Singh, Chief Executive Officer of Docker, believes that embracing Kubernetes will rule out potential conflicts, and that they want customers to have a choice between using Swarm or Kubernetes, or both. \u0026ldquo;Our hope is that every application company in the world builds and delivers their products on the Docker platform in Docker containers,\u0026rdquo; Singh said.\nBut Kubernetes offer far more; it has many capabilities specifically for orchestration, including load balancing, service discovery and horizontal scaling. It also gives organizations the ability to have a flexible platform to execute their workloads in the cloud, or on-site, without the need for any application layer changes. Kubernetes also has a very large developer community, making it one of the fastest growing open source projects in the world.\nContainer Technology is Growing #Container technology is growing rapidly every year, with the market expecting to grow around 40 percent every year, to an impressive $2.7 billion by 2020. Experts believe that a big factor in this growth potential is the fact that organizations are incorporating containers specifically due to their portability, which reduces costs and offers better infrastructure utilization.\nKubernetes are fast becoming the central container orchestration engine for various leading cloud providers such as IBM, Google, Pivotal, Oracle, Microsoft, and Red Hat. Most industry leaders in Platform-as-a-service (PaaS) and Infrastructure-as-a-Service (IaaS) have also joined CNCF, making Kubernetes part of their service offering.\nGoing forward, every six months Kubernetes will be updated, beginning with version 1.8 that is included in Docker’s Enterprise Edition. For desktop users who often use Docker, the Windows and Mac versions will be taken directly from the master, ensuring that features are always developed in a timely manner without any complications.\nIt’s also interesting to note that Solomon Hykes, one of the founding members of Docker, is also on the technical committee of the CNCF, the group that manages containerd, Linkerd, and Kubernetes, along with a few other popular container-focused projects. Hykes is a founding member and has been contributing to various CNCF projects.\n“We’re already active. With this announcement, that’s going to continue and accelerate. We intend to be first class citizens and participate as full class members,” said Johnston.\nJohnston also noted that they are working with a security team that was acquired from Square a few years ago, and they are handling most of the security work for Docker Enterprise Edition. The team constantly improves the overall security of the platform, and will continue to do so.\nSwarm and Kubernetes: Side by Side #Docker decided to provide a design that allows for the simultaneous running of Swarm and Kubernetes in the same cluster. When Swarm is deployed, an option is provided to also install Kubernetes, which will then take on the redundancy design of the Swarm install.\nHykes said that developers who use Docker won’t have to learn new tools for Kubernetes. 
Rather, a complete Kubernetes distribution will be built in with the next version of Docker, allowing developers to use the same tools they have always used.\n\u0026ldquo;You can just keep developing and it just works, and if you do want to use Kubernetes tools, Docker is a good distribution, so you get the best of all worlds,\u0026rdquo; Hykes said.\nWhen looking at resources, it might be challenging as both Swarm and Kubernetes can run on one host, each being unaware of the other. This means that each orchestrator will attempt to make complete use of a single host, which is why Docker does not recommend that both be run on the same host.\nWith Kubernetes now being the modern standard for container orchestration, Docker made the perfect decision to support Kubernetes. Instead of competing, it is embracing the technology and offering its clients exactly what they want – developer tools that are easy to work with.\nIn Conclusion #This is definitely a very important moment for the container ecosystem, as Docker remains a leader when it comes to container-based development. With availability expected in Q1 2018, and integration of Kubernetes with Docker EE, Docker is not only a leading development platform, but also serves as a production-level platform that can compete with PaaS solutions.\n","date":"25 January 2018","permalink":"/posts/docker-embracing-kubernetes/","section":"Posts","summary":"In this article we look at how Docker has embraced Kubernetes.","title":"Docker Embracing Kubernetes"},{"content":"","date":null,"permalink":"/tags/certificates/","section":"Tags","summary":"","title":"Certificates"},{"content":"","date":null,"permalink":"/tags/ssl/","section":"Tags","summary":"","title":"Ssl"},{"content":"This document will guide you through creating a Certificate Signing Request (CSR) with Subject Alternative Names (SAN).\nGetting Started #These instructions have been run on a RHEL Linux system.\nSAN stands for \u0026ldquo;Subject Alternative Names\u0026rdquo; and this helps you to have a single certificate for multiple CNs (Common Names). In a SAN certificate, you can have multiple complete CNs.\nFor example:\nexample.com example.net example.org You can have the above domains and more in a single certificate. 
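A quick way to see this in practice is to pull the SAN list from a certificate that is already deployed on a live site (example.com below is only a placeholder host, substitute any HTTPS site you like):
[user@server ~]$ echo | openssl s_client -connect example.com:443 -servername example.com | openssl x509 -noout -text | grep -A1 Alternative
The grep shows the X509v3 Subject Alternative Name extension together with the line listing its DNS entries.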
One use case for this is loadbalancing, the Virtual IP could be the CN and then the hosts behind the LB would be the SAN entries.\nNext we look at a real life example of wikipedia.org, which has many SAN entries in a single certificate.\nScreensot of Wikipedia SAN As you can see in the screenshot there are multiple SAN entries for the wikipedia.org URL.\nPrerequisites #A working installation of OpenSSL [root@server ~]# yum install openssl\nCreate CSR Config #Create a directory to hold the CSR, Key and eventually the Certificate [user@server ~]$ cd /tmp [user@server ~]$ mkdir /tmp/san_cert [user@server ~]$ cd /tmp/san_cert\nCreate a file called san.cnf [user@server ~]$ touch /tmp/san_cert/san_cert.cnf [user@server ~]$ vi /tmp/san_cert/san_cert.cnf\nAdd the following content to the /tmp/san_cert/san_cert.cnf file [ req ] default_bits = 2048 distinguished_name = req_distinguished_name req_extensions = v3_req prompt = no [ req_distinguished_name ] countryName = DE stateOrProvinceName = BY localityName = Munich organizationName = SomeCompany organizationalUnitName = SomeUnit commonName = vip.example.com emailAddress = user@example.com [ v3_req ] subjectAltName = @alt_names [alt_names] DNS.1 = vip.example.com IP.1 = 192.0.2.10 DNS.2 = host01.example.com IP.2 = 192.0.2.20 DNS.3 = host02.example.com IP.3 = 192.0.2.30\nTo add additional SAN records, add to the alt_names section and save the file\nCreate the CSR #Execute the following OpenSSL command, which will generate CSR and KEY file [user@server ~]$ openssl req -out /tmp/san_cert/san_cert.csr -newkey rsa:2048 -nodes -keyout /tmp/san_cert/san_cert_private.key -config /tmp/san_cert/san_cert.cnf\nThis will create san_cert.csr and san_cert_private.key in the /tmp/san_cert/ directory. You have to send san_cert.csr to certificate signing authority so they can generate and provide you the certificate with SAN attributes.\nTesting #Verify the CSR #You can verify the CSR has been created with the SAN attributes by running the following command, the output should list DNS and IP entries, if nothing is returned there is a problem with the cnf file. [user@server ~]$ openssl req -noout -text -in /tmp/san_cert/san_cert.csr | grep DNS DNS:vip.example.com, IP Address:192.0.2.10, DNS:host01.example.com, IP Address:192.0.2.20, DNS:host02.example.com, IP Address:192.0.2.30\n","date":"10 January 2018","permalink":"/posts/ssl-certificates-with-san-attributes/","section":"Posts","summary":"This technical article will show you how to create an Certificate Signing Request with SAN attributes.","title":"SSL Certificates with SAN Attributes"},{"content":"Moving to Hugo #I recently decided to migrate my technical blog to Hugo. The main reason for this change was the ability to manage all my posts using GitHub, leveraging version control for better document organization and management. As part of this migration, all articles needed to be converted into Markdown.\nPreviously, my blog was hosted on WordPress. While it served me well for years, I found it increasingly cumbersome for my needs. Running WordPress involves significant overhead, relying on Apache, PHP, and MySQL just to serve static content. In contrast, Hugo uses a lightweight binary to generate static HTML files from markdown, which can then be served by Apache or any other web server.\nBoth platforms have their advantages, but for me, reducing resource overhead and improving version control were decisive factors.\nSwitching to markdown brings additional benefits:\nA standard format for all articles. 
Easy output conversion to formats like PDF, Word, HTML, and more. Looking ahead, I plan to share more detailed insights into my Hugo-based setup and its implementation.\n","date":"9 January 2018","permalink":"/posts/migration-to-hugo/","section":"Posts","summary":"Learn why I switched from WordPress to Hugo, emphasizing markdown-based content, GitHub integration, and improved efficiency for static websites.","title":"Migration to Hugo"},{"content":"A common method of stakeholder analysis is a Stakeholder Matrix. This is where stakeholders are plotted against two variables. These variables can be the importance of the stakeholder against their influence.\nMatrix Diagram # Screenshot of a Stakeholder Matrix Boxes A, B and C are the key stakeholders of the project. Each box is summarised below:\nBox A #These are stakeholders who have a high degree of influence on the project and who are also of high importance for its success. Good working relationships must be built with these stakeholders.\nBox B #These are stakeholders of high importance to the success of the project, but with low influence. These are stakeholders who might be beneficiaries of a new service, but who have little ‘voice’ in its development.\nBox C #These are stakeholders with high influence, who can therefore affect the project outcomes, but whose interests are not necessarily aligned with the overall goals of the project.\nBox D #The stakeholders in this box have low influence on, and low importance to, the project objectives; they may require limited monitoring or evaluation, but are of low priority.\nHow to Use # Make a list of all stakeholders. Write the name of each stakeholder on a post-it note or index card. Rank the stakeholders on a scale of one to five, according to one of the criteria on the matrix, such as \u0026lsquo;interest in the project outcomes\u0026rsquo; or \u0026lsquo;interest in the subject\u0026rsquo;. Keeping this ranking for one of the criteria, plot the stakeholders against the other criteria of the matrix. This is where using post-it notes or removable cards is useful. Ask the following questions: Are there any surprises? Which stakeholders do we have the most/least contact with? Which stakeholders might we have to make special efforts with to ensure engagement? ","date":"5 December 2017","permalink":"/posts/stakeholder-matrix/","section":"Posts","summary":"Discover how to use a stakeholder matrix to analyze and manage project stakeholders. This article explains the matrix structure, its key elements, and steps to identify and engage stakeholders based on their importance and influence.","title":"Stakeholder Matrix"},{"content":"This document will show you how to clean up old, unused kernels on a RHEL7 (Redhat 7 or CentOS 7) based machine.
Kernel updates leave the previously installed kernels on the system, and over time these can take up significant space in /boot. Below we remove the old kernels and limit how many are kept in future.\nCheck Installed Kernels #The command below will list all kernels that are currently installed on the system\n[root@server ~]# rpm -q kernel kernel-3.10.0-514.el7.x86_64 kernel-3.10.0-514.6.1.el7.x86_64 kernel-3.10.0-693.5.2.el7.x86_64 kernel-3.10.0-693.11.1.el7.x86_64 kernel-3.10.0-693.11.6.el7.x86_64 The uname command will show which kernel is currently running\n[root@server ~]# uname -r 3.10.0-693.11.6.el7.x86_64 Remove Old Kernels #Next we will install the yum-utils package which contains the tools we need to limit the number of installed kernels.\nInstall Utilities #[root@server ~]# yum install yum-utils Set Kernels to Keep #The package-cleanup command is used to set how many kernels will be kept. The command below keeps the 2 most recent kernels and removes the rest.\n[root@server ~]# package-cleanup --oldkernels --count=2 Loaded plugins: fastestmirror --\u0026gt; Running transaction check ---\u0026gt; Package kernel.x86_64 0:3.10.0-514.el7 will be erased ---\u0026gt; Package kernel.x86_64 0:3.10.0-514.6.1.el7 will be erased ---\u0026gt; Package kernel.x86_64 0:3.10.0-693.5.2.el7 will be erased --\u0026gt; Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Removing: kernel x86_64 3.10.0-514.el7 @anaconda 148 M kernel x86_64 3.10.0-514.6.1.el7 @updates 148 M kernel x86_64 3.10.0-693.5.2.el7 @updates 59 M Transaction Summary ================================================================================ Remove 3 Packages Installed size: 355 M Is this ok [y/N]: y Downloading packages: Running transaction check Running transaction test Transaction test succeeded Running transaction Erasing : kernel.x86_64 1/3 Erasing : kernel.x86_64 2/3 Erasing : kernel.x86_64 3/3 Verifying : kernel-3.10.0-693.5.2.el7.x86_64 1/3 Verifying : kernel-3.10.0-514.6.1.el7.x86_64 2/3 Verifying : kernel-3.10.0-514.el7.x86_64 3/3 Removed: kernel.x86_64 0:3.10.0-514.el7 kernel.x86_64 0:3.10.0-514.6.1.el7 kernel.x86_64 0:3.10.0-693.5.2.el7 Complete!
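After the removal completes, it is worth confirming that the kernel you are currently running is still installed before you reboot (a quick sanity check; this step is an addition to the original procedure):\n[root@server ~]# rpm -q kernel | grep $(uname -r)\nIf the command prints a kernel package name, the running kernel is still present; if it prints nothing, investigate before rebooting.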
Kernel Count Check #Next, check how many kernels are left installed; it should be 2\n[root@server ~]# rpm -q kernel kernel-3.10.0-693.11.1.el7.x86_64 kernel-3.10.0-693.11.6.el7.x86_64 Update Installed Kernels Permanently #Next we need to set the number of kernels to stay at two permanently.\nEdit /etc/yum.conf or /etc/dnf/dnf.conf and set installonly_limit:\ninstallonly_limit=2 That\u0026rsquo;s it. Now, whenever we update the system, only the last two kernels will be kept on the system.\n","date":"28 November 2017","permalink":"/posts/kernel-cleanup-using-yum/","section":"Posts","summary":"This technical article describes the process of using Yum to clean up old and unused kernels on a RHEL based system.","title":"Kernel Cleanup Using YUM"},{"content":"","date":null,"permalink":"/tags/yum/","section":"Tags","summary":"","title":"Yum"},{"content":"","date":null,"permalink":"/tags/lvm/","section":"Tags","summary":"","title":"Lvm"},{"content":"This procedure describes how to move data from one Physical Volume to another in an LVM configuration on a RHEL based system.\nAcronyms # Acronym Meaning PV Physical Volume LV Logical Volume VG Volume Group High-level Procedure # Check Current Configuration (Using Multipath/powermt) Check Space on Existing LUNs and VGs Configure/Present LUN to server Scan for LUN on server Add LUN to PV Extend VG to include new LUN Check new LUN and VG have enough space to migrate Migrate data from one PV to new PV Remove old PV from VG Check VG has correct LUNs Detailed Procedure #For the example below, it is assumed 3 LUNs will be used and 1 will be updated/swapped.\nLUN Presentation #Confirm the LUN is presented to the new server from the storage.\nLUN Rescan #Rescan for presented LUNs. Check how many fibre channel hosts are on the system. [root@server ~]# ls /sys/class/fc_host host0 host1 host2 host3 Perform a rescan on each fc port/host. [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host0/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host0/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host1/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host1/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host2/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host2/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host3/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host3/scan [root@server ~]# cat /proc/scsi/scsi | egrep -i \u0026#39;Host:\u0026#39; | wc -l\nRestart Multipathd (If Used) #Restart Multipathd when the scan has been completed. [root@server ~]# service multipathd restart Check Multipath has new routes for the newly presented LUNs and both paths are active. [root@server ~]# multipath -ll\nRescan PowerPath (If Used) #Rescan PowerPath. [root@server ~]# powermt config Check PowerPath has new routes for the newly presented LUNs and both paths are active. [root@server ~]# powermt display dev=all\nCheck Current LUNs #Check the current LUNs in the VG. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0\nAdd New LUNs #If the scan was successful, add the new LUNs. [root@server ~]# pvcreate /dev/mapper/mpathd\nCheck Current LUNs #As you can see below, it has been added but is not currently assigned to a VG.
[root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd lvm2 --- 1020.00m 1023.00m\nAdd PV to VG #Add the newly created PV to the VG. [root@server ~]# vgextend vg_test01 /dev/mapper/mpathd\nCheck VG #Make sure you can see the PV. As you can see below, it is now assigned to a VG, and the new LUN /dev/mapper/mpathd has free space. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 1023.00m\nMigrate Data #Now move the data from the old LUN to the new LUN. [root@server ~]# pvmove /dev/mapper/mpatha /dev/mapper/mpathd /dev/mapper/mpatha: Moved: 0.39% /dev/mapper/mpatha: Moved: 38.04% /dev/mapper/mpatha: Moved: 75.69% /dev/mapper/mpatha: Moved: 100.00%\nCheck VG #Check the data has moved. You can now see that the old LUN /dev/mapper/mpatha is the one with the free space and /dev/mapper/mpathd is no longer 100% free. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1020.00m 1020.00m /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\nRemove Old LUN #Now remove the old LUN from the VG. Make sure this is the LUN that is 100% free. [root@server ~]# vgreduce vg_test01 /dev/mapper/mpatha\nCheck VG #Check the old LUN has been disassociated from the VG. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha lvm2 --- 1023.00m 1023.00m /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\nRemove Old PV #Remove the PV label from the old LUN [root@server ~]# pvremove /dev/mapper/mpatha\nCheck VG #Check the old LUN /dev/mapper/mpatha has been removed [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\n","date":"17 October 2017","permalink":"/posts/lvm-migration/","section":"Posts","summary":"This technical article covers how to migrate data from one Physical Volume to another within LVM on RHEL based systems.","title":"LVM Migration"},{"content":"OpenSSL is one of the most versatile SSL tools. It is an open source implementation of the SSL/TLS protocols. OpenSSL is usually used to create a CSR (Certificate Signing Request) and Private Keys.
It also has many other functions that allow you to view the details of a CSR, Key or Certificate and to convert certificates between formats.\nListed below are the most common OpenSSL commands and their usage:\nGeneral OpenSSL Commands #These commands enable generation of Private Keys, CSRs and Certificates.\nGenerate a new Private Key and Certificate Signing Request #[root@server ~]# openssl req -out csr.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key Generate a self-signed certificate #[root@server ~]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privatekey.key -out certificate.crt Generate a certificate signing request (CSR) for an existing private key #[root@server ~]# openssl req -out csr.csr -key privatekey.key -new Generate a certificate signing request based on an existing certificate #[root@server ~]# openssl x509 -x509toreq -in certificate.crt -out csr.csr -signkey privatekey.key Remove a passphrase from a private key #[root@server ~]# openssl rsa -in privatekey.pem -out newprivatekey.pem Checking Using OpenSSL #These commands enable checking of information within a Private Key, CSR or Certificate.\nCheck a Certificate Signing Request (CSR) #[root@server ~]# openssl req -text -noout -verify -in csr.csr Check a private key #[root@server ~]# openssl rsa -in privatekey.key -check Check a certificate #[root@server ~]# openssl x509 -in certificate.crt -text -noout Check a PKCS#12 file (.pfx or .p12) #[root@server ~]# openssl pkcs12 -info -in keystore.p12 Debugging Using OpenSSL #These commands enable debugging of Private Keys, CSRs and Certificates.\nCheck the MD5 hash of a Public Key to ensure it matches the contents of the CSR or Private Key #[root@server ~]# openssl x509 -noout -modulus -in certificate.crt | openssl md5 openssl rsa -noout -modulus -in privatekey.key | openssl md5 openssl req -noout -modulus -in csr.csr | openssl md5 Check an SSL connection. All the Certificates (including Intermediates) should be displayed #[root@server ~]# openssl s_client -connect www.google.com:443 Converting Using OpenSSL #These commands allow you to convert Keys and Certificates to different formats to make them compatible with specific types of servers or software. For example, you can convert a normal PEM file that would work with Apache to a PFX (PKCS#12) file and use it with Tomcat or IIS.\nConvert a DER file (.crt .cer .der) to PEM #[root@server ~]# openssl x509 -inform der -in certificate.cer -out certificate.pem Convert a PEM file to DER #[root@server ~]# openssl x509 -outform der -in certificate.pem -out certificate.der Convert a PKCS#12 file (.pfx .p12) containing a Private Key and Certificates to PEM #[root@server ~]# openssl pkcs12 -in keystore.pfx -out keystore.pem -nodes You can add -nocerts to only output the private key or add -nokeys to only output the certificates.\nConvert a PEM Certificate file and a Private Key to PKCS#12 (.pfx .p12) #[root@server ~]# openssl pkcs12 -export -out certificate.pfx -inkey privatekey.key -in certificate.crt -certfile cacert.crt ","date":"31 December 2015","permalink":"/posts/common-openssl-commands/","section":"Posts","summary":"This guide outlines essential OpenSSL commands, including generating keys, creating CSRs, self-signed certificates, and converting formats. Learn to verify and debug certificates efficiently.","title":"Common OpenSSL Commands"},{"content":"This how-to will show you how to disable IPv6 on RHEL7. IPv6 is enabled by default on a standard install of RHEL 7.
The method to disable IPv6 on Redhat 7 is much the same as on Redhat 6. Below are the details on how to do this.\nCheck IPv6 #Check that IPv6 is actually configured.\n[root@server ~]# ip addr 1: lo: \u0026lt;LOOPBACK,UP,LOWER_UP\u0026gt; mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 00:19:5e:64:03:09 brd ff:ff:ff:ff:ff:ff inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::115:8dff:fd64:409/64 scope link valid_lft forever preferred_lft forever Disable IPv6 #To disable IPv6 run the following commands.\n[root@server ~]# sysctl -w net.ipv6.conf.all.disable_ipv6=1 net.ipv6.conf.all.disable_ipv6 = 1 [root@server ~]# sysctl -w net.ipv6.conf.default.disable_ipv6=1 net.ipv6.conf.default.disable_ipv6 = 1 Check IPv6 is Disabled #Check that IPv6 is no longer configured; the inet6 entries should be gone.\n[root@server ~]# ip addr 1: lo: \u0026lt;LOOPBACK,UP,LOWER_UP\u0026gt; mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 2: eth0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 00:19:5e:64:03:09 brd ff:ff:ff:ff:ff:ff inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0 valid_lft forever preferred_lft forever Disable IPv6 Permanently #To disable IPv6 permanently, you need to add the settings to the sysctl configuration.\nEdit the following file\n[root@server ~]# vi /etc/sysctl.conf Add the following content to the end of the file\n# Disable IPv6 net.ipv6.conf.all.disable_ipv6 = 1 net.ipv6.conf.default.disable_ipv6 = 1 That should be it; IPv6 will now be disabled at boot time on your Redhat 7 or CentOS 7 system.\n","date":"28 July 2014","permalink":"/posts/disable-ipv6-rhel7/","section":"Posts","summary":"This technical post goes through the steps needed to disable IPv6 on a Redhat 7 based system","title":"Disable IPv6 RHEL7"},{"content":"","date":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking"},{"content":"Overview #This HowTo will provide some information about the different types of hostnames and how to set them on a RHEL7 (Redhat 7 or CentOS 7) based machine. If you have built your new RHEL7 based machine and are now a bit stuck on how to change the hostname from localhost.localdomain to whatever you want, this is the how-to for you.\nTypes Of Hostnames #There are three types of hostnames: Static, Pretty and Transient.\nStatic Hostname #The Static hostname is essentially the traditional hostname, which is stored in the \u0026ldquo;/etc/hostname\u0026rdquo; file.\n[user@server ~]$ cat /etc/hostname server.example.com Transient Hostname #The Transient hostname is a dynamic hostname which is maintained at the kernel level.
It is initialized from the static hostname, but can be changed by DHCP and other network services.\nPretty Hostname #The Pretty hostname is a free-form hostname for presentation to the user.\nSet The Hostname #The hostname can be changed by editing the \u0026ldquo;/etc/hostname\u0026rdquo; file or with the hostnamectl command.\n[root@server ~]# hostnamectl set-hostname server-test.example.com This command will set all three hostnames at the same time, but each can be set individually using the \u0026ldquo;--static\u0026rdquo;, \u0026ldquo;--transient\u0026rdquo; or \u0026ldquo;--pretty\u0026rdquo; flags.\nValidate that \u0026ldquo;/etc/hostname\u0026rdquo; has been updated\n[root@server ~]# cat /etc/hostname server-test.example.com Hostname Information #You can see a useful summary of system information with the hostnamectl command (see man hostnamectl for details).\n[root@server ~]# hostnamectl Static hostname: server.domain.tld Icon name: computer-vm Chassis: vm Machine ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Boot ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Virtualization: vmware Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-123.4.4.el7.x86_64 Architecture: x86_64 ","date":"13 July 2014","permalink":"/posts/set-hostname-rhel7/","section":"Posts","summary":"This technical article walks through how to set the hostname of a server running Redhat 7 OS.","title":"Set RHEL7 Hostname"},{"content":"","date":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories"}]