Linux Bash

Providing immersive, explanatory content in a simple way that anybody can understand.

  • Posted on
    In the realm of Linux, security is a top priority, and one of the innovative tools for enhancing security is firejail. This sandboxing tool limits the scope of program operations using Linux namespaces and seccomp-bpf, which stands for Secure Computing Mode with Berkeley Packet Filter. Primarily, it's used to restrict the system calls that a process can execute. In this blog, we will explore how firejail can be used to restrict a script's access to specific syscalls. Q: Can you explain what firejail is and why it's useful? A: Firejail is a sandboxing tool that uses Linux namespaces and seccomp technology to restrict the running environment of untrusted applications.
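    A minimal sketch of restricting syscalls with firejail (the script name untrusted.sh and the dropped syscalls are illustrative):

        # Run a script in a sandbox, dropping specific syscalls entirely;
        # any attempt to call them fails instead of reaching the kernel.
        firejail --noprofile --seccomp.drop=chmod,chown,mount ./untrusted.sh

        # Or apply firejail's default seccomp filter instead:
        firejail --seccomp ./untrusted.sh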
  • Posted on
    Sudo, one of the most common utilities on Unix-like operating systems, enables users to run programs with the security privileges of another user, typically the superuser. Effective monitoring of sudo usage is critical in system administration for maintaining security and ensuring that users are accountable for their privileged operations. In this article, we'll explore how you can use bash scripts to parse /var/log/secure to audit all sudo invocations in real time, enhancing security oversight in Linux environments. Q&A: Real-Time sudo Invocation Auditing Q1: What is /var/log/secure? A1: /var/log/secure is a log file on Linux systems that records authentication and authorization information, including sudo command usage.
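    A minimal real-time auditing sketch, assuming a RHEL-style /var/log/secure (Debian-family systems log to /var/log/auth.log instead):

        #!/usr/bin/env bash
        # Follow the auth log and print each sudo invocation as it happens.
        # Requires read access to the log file (typically root).
        LOG=/var/log/secure
        tail -Fn0 "$LOG" | grep --line-buffered 'sudo:.*COMMAND=' |
        while IFS= read -r line; do
            printf '[%s] %s\n' "$(date '+%F %T')" "$line"
        done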
  • Posted on
    When it comes to deleting sensitive files, simply removing them using the rm command in Linux doesn't guarantee that the files are unrecoverable. The data remains on the disk and could potentially be restored using data recovery tools. This is where the shred command becomes invaluable, especially for those who need to ensure that their confidential or sensitive data is irrecoverable. Q&A: Using shred -u for Secure File Deletion Q1: What does the shred command do? A1: shred is a command in Linux that overwrites a file to hide its contents and optionally deletes it. It makes the recovery of the data more difficult by using multiple overwriting passes.
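    For example:

        # Overwrite secret.txt with 3 passes of random data, add a final
        # zeroing pass to hide the shredding, then remove the file.
        shred -v -n 3 -z -u secret.txt
        # Caveat: on journaling or copy-on-write filesystems (e.g. btrfs),
        # overwriting in place is not guaranteed to destroy every copy.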
  • Posted on
    In the realms of cybersecurity and data integrity, the signing and verification of files to confirm their authenticity and integrity is paramount. This mechanism ensures that files have not been tampered with and originate from a verified source. With the evolution of OpenSSH and its associated tools, a newer, efficient command, ssh-keygen -Y, has been introduced, providing users with the capability to utilize SSH keys for these purposes. Q&A on Using ssh-keygen -Y in OpenSSH 8.0+ Q1: What is the ssh-keygen -Y command? A1: The ssh-keygen -Y command is a feature in newer versions of the SSH utilities that allows users to sign files with their private SSH keys and verify those signatures using corresponding public keys.
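    A short signing and verification sketch (file names and the identity alice@example.com are illustrative):

        # Sign document.txt under the "file" namespace; writes document.txt.sig
        ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file document.txt

        # allowed_signers maps identities to public keys, one per line, e.g.:
        #   alice@example.com ssh-ed25519 AAAA...
        ssh-keygen -Y verify -f allowed_signers -I alice@example.com \
            -n file -s document.txt.sig < document.txt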
  • Posted on
    Bash offers an array of powerful mechanisms for network operations, one of which is the lesser-known pseudo-device /dev/tcp. This facility can be used directly from the Bash shell to interact with TCP sockets. In today's post, we will explore how to implement a basic port scanner using /dev/tcp and handle connection timeouts to make the script more efficient and user-friendly. Q&A on Implementing a Port Scanner with /dev/tcp and Timeout Handling Q1: What is /dev/tcp and how does it work? A1: /dev/tcp is not a real device file; it is a pseudo-device path that the Bash shell intercepts as part of its built-in redirection mechanisms. It allows you to open a connection to a specific TCP port on a host. You can use it to check if the port is open by redirecting output or input to this device.
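    A minimal scanner sketch along these lines (the default port range is illustrative):

        #!/usr/bin/env bash
        # Scan a port range using Bash's /dev/tcp, 1-second timeout per port.
        host=${1:?usage: $0 host [first_port] [last_port]}
        first=${2:-1} last=${3:-1024}
        for ((port = first; port <= last; port++)); do
            # $0/$1 inside the inner bash receive $host/$port safely
            if timeout 1 bash -c 'exec 3<>"/dev/tcp/$0/$1"' "$host" "$port" 2>/dev/null; then
                echo "open: $port"
            fi
        done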
  • Posted on
    Today, we'll uncover how to generate a Time-based One-Time Password (TOTP) straight from your Linux terminal using openssl and date +%s. This guide is aimed at enhancing your understanding of cybersecurity measures like two-factor authentication (2FA) while providing a practical example using common Linux tools. Q&A on Generating a TOTP Token in Bash Q1. What is a TOTP token? A1. A Time-based One-Time Password (TOTP) token is a temporary passcode used in two-factor authentication systems. It combines something the user knows (a secret key) with something the user has (typically, a time source) to produce a password that changes every 30 seconds. Q2. Why use openssl and date +%s in Bash for generating a TOTP token? A2.
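    A minimal RFC 6238 sketch (SHA-1, 30-second step, 6 digits; the hex secret shown is the RFC test key, used here only as an example):

        #!/usr/bin/env bash
        SECRET_HEX=3132333435363738393031323334353637383930  # example key only
        step=$(( $(date +%s) / 30 ))                # current 30 s time step
        counter=$(printf '%016X' "$step")           # 8-byte big-endian counter
        hmac=$(printf '%s' "$counter" | xxd -r -p |
               openssl dgst -sha1 -mac HMAC -macopt hexkey:"$SECRET_HEX" |
               awk '{print $NF}')
        offset=$(( 16#${hmac: -1} ))                # dynamic truncation offset
        trunc=${hmac:$(( offset * 2 )):8}           # 4 bytes at that offset
        printf '%06d\n' $(( (16#$trunc & 0x7fffffff) % 1000000 ))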
  • Posted on
    Secure communication over the network is essential, especially when sensitive data is transmitted between a client and a server. Using tools like socat, a multipurpose relay for bidirectional data transfer, we can create secure pathways with features like TLS (Transport Layer Security), ensuring that the data remains private and integral. This blog article will cover how to use socat to set up a TLS tunnel with mutual authentication, ensuring both the client and the server verify each other's identities before establishing a connection.
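    A sketch of both ends, assuming PEM files issued by a common CA (each cert file is assumed to contain the certificate plus its private key):

        # Server: listen on 4433, present server.pem, and require a valid
        # client certificate signed by ca.crt (mutual authentication).
        socat openssl-listen:4433,reuseaddr,cert=server.pem,cafile=ca.crt,verify=1 STDIO

        # Client: present client.pem and verify the server against ca.crt.
        socat STDIO openssl-connect:server.example.com:4433,cert=client.pem,cafile=ca.crt,verify=1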
  • Posted on
    When working in the Linux environment, encountering hexdumps is common, especially for those dealing with system-level programming or network security. An often-asked question is how to efficiently convert these hexdumps back to their binary form. Here, we explore the streamlined command xxd -r -p, perfect for tasks needing a binary format without extra formatting like line breaks. A hexdump is a hexadecimal (base 16) display of binary data. It is commonly used in debugging or inspecting data that doesn't lend itself well to being displayed in human-readable formats. A hexdump couples hexadecimal data representation with the potentially corresponding ASCII characters (or '.' for non-printable characters).
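    For example:

        # Plain hex in, raw binary out: -r reverses the dump, -p means plain
        # hex (no offsets or ASCII column; whitespace and newlines ignored).
        echo '48656c6c6f2c20776f726c6421' | xxd -r -p > out.bin
        cat out.bin        # -> Hello, world!
        xxd -p out.bin     # back to a plain hexdump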
  • Posted on
    In the vast toolbox of Linux command-line utilities, pr stands out when you need to process text for printing or viewing in formatted ways. While typically used for preparing data for printing, it can be repurposed for various other tasks, such as merging multiple text files side by side. In this blog, we'll explore how to use the pr command specifically with the -m and -t options to merge files side by side without headers, offering both an easy guide and practical examples. Q&A: Merging Files with pr -m -t Q1: What is the pr command? A1: The pr command in Linux is used to convert text files for printing. It can format plain text in a variety of ways, such as pagination, columnar formatting, and header/footer handling.
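    For example:

        # -m merges the files into parallel columns; -t suppresses the
        # header and trailer lines pr normally adds to each page.
        pr -m -t file1.txt file2.txt
        # Widen the output if long lines get truncated:
        pr -m -t -w 120 file1.txt file2.txt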
  • Posted on
    Anyone who uses Git knows that git log can provide a powerful glimpse into the history of a project. However, analyzing this data can be cumbersome without the proper tools to parse and structure this output. This blog post aims to guide you through using awk along with regular expressions (regex) to turn the git log output into a neatly structured CSV file. Q1: What requirements should I meet before I start? A: Ensure you have Git and awk installed on your Linux system. awk is typically pre-installed on most Linux distributions, and Git can be installed via your package manager (e.g., sudo apt install git on Debian/Ubuntu). Q2: How do I control what git log prints? A: You can customize your git log output format using the --pretty=format: option.
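    A sketch of the idea, using an ASCII unit separator so commas inside commit subjects survive the conversion:

        sep=$'\x1f'   # field separator that won't appear in commit data
        git log --date=short --pretty=format:"%h${sep}%an${sep}%ad${sep}%s" |
        awk -v FS="$sep" -v OFS=',' '
        BEGIN { print "hash,author,date,subject" }
        { gsub(/"/, "\"\"", $4)                  # CSV-escape embedded quotes
          print $1, $2, $3, "\"" $4 "\"" }'      # quote the free-text field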
  • Posted on
    While the typical go-to command for splitting files in Linux is split, you may encounter scenarios where split isn't available, or you require a method that integrates more tightly with other shell commands or scripts. The dd command, known for its data copying capabilities, offers a powerful alternative for splitting files by using byte-specific operations. Q&A: Splitting Files Using dd Q1: What is the dd command? A1: The dd command in Linux is a versatile utility used for low-level copying and conversion of raw data. It can read, write, and copy data between files, devices, or partitions at specified sizes and offsets, making it valuable for tasks such as backing up boot sectors or exact block-level copying of devices.
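    A sketch splitting a file into fixed-size chunks (the file name and the 10 MiB chunk size are illustrative; stat -c is GNU coreutils):

        #!/usr/bin/env bash
        infile=big.bin
        chunk=$((10 * 1024 * 1024))                   # 10 MiB per piece
        size=$(stat -c %s "$infile")
        parts=$(( (size + chunk - 1) / chunk ))       # round up
        for ((i = 0; i < parts; i++)); do
            # skip=$i jumps i blocks of bs bytes into the input
            dd if="$infile" of="part_$i" bs="$chunk" skip="$i" count=1 status=none
        done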
  • Posted on
    Welcome to our guide on using the iconv command for converting accented characters to ASCII in Linux Bash. In this blog, we'll explore the functionality of iconv, particularly focusing on transliteration as part of text processing in pipelines. Q1: What is iconv? A1: iconv is a command-line utility in Unix-like operating systems that converts the character encoding of text. It is especially useful for converting between various encodings and for transliterating characters.
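    For example:

        # //TRANSLIT asks iconv to approximate characters that have no
        # ASCII equivalent; the exact output depends on your locale.
        echo 'Crème brûlée à São Paulo' | iconv -f UTF-8 -t ASCII//TRANSLIT
        # -> Creme brulee a Sao Paulo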
  • Posted on
    In the complex expanse of text processing in Linux, sometimes we come across the need to find or manipulate hidden characters that are not visible but can significantly affect the processing of data. Invisible Unicode characters like zero-width spaces can end up in text files unintentionally through copying and pasting or through web content. This blog will explain how to detect these using grep with a Perl-compatible regex. Q&A on Matching Invisible Characters with grep -P Q1: What does grep -P enable? A1: grep -P enables the Perl-compatible regular expression (PCRE) functionality in grep, providing a powerful tool for pattern matching. This mode supports advanced regex features not available in standard grep.
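    A short sketch (run in a UTF-8 locale so PCRE can match codepoints above 0xFF):

        # Flag lines containing common invisible characters:
        # U+200B zero-width space, U+200C/U+200D joiners, U+FEFF BOM.
        grep -P -n '[\x{200B}\x{200C}\x{200D}\x{FEFF}]' file.txt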
  • Posted on
    In Unix-like operating systems, awk is a powerful text processing tool, commonly used to manipulate data and generate reports. One lesser-known feature of awk is its ability to control the traversal order of arrays using the PROCINFO["sorted_in"] setting. This blog post delves into how to utilize this feature, enhancing your awk scripts' flexibility and efficiency. Q1: What is awk? A1: awk is a scripting language used for manipulating data and generating reports. It's particularly strong in pattern scanning and processing. awk operations are based on the pattern-action model, where you specify conditions to test each line of data and actions to perform when conditions are met.
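    A gawk-specific sketch (PROCINFO["sorted_in"] is a gawk extension, not POSIX awk):

        gawk 'BEGIN {
            a[3] = "c"; a[1] = "a"; a[2] = "b"
            PROCINFO["sorted_in"] = "@ind_num_asc"  # ascending numeric index
            for (i in a) print i, a[i]              # prints 1 a, 2 b, 3 c
        }'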
  • Posted on
    Q: How can I replace all but the Nth occurrence of a pattern using sed? A: To accomplish this in Bash using sed, you can use a combination of commands and control structures to precisely target and modify all but the specific (Nth) occurrence of a pattern. The task combines basic sed operations with some scripting logic to specify which instances to replace. Step-by-step Guide: Identify the Pattern: Determine the pattern that you wish to find and replace. Skip the Nth Occurrence: We achieve this by using a combination of commands that keeps track of how many times the pattern has been matched and skips the replacement on the Nth match. Use the sed Command: The sed command is employed to perform the text manipulation.
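    One way to sketch this with GNU sed is a three-step substitution: hide the Nth match behind a placeholder, replace the rest, then restore it (the placeholder byte \x01 is assumed absent from the input):

        # Replace every "foo" with "bar" except the 2nd occurrence per line.
        sed -e 's/foo/\x01/2' \
            -e 's/foo/bar/g' \
            -e 's/\x01/foo/' file.txt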
  • Posted on
    In the world of scripting and programming, handling JSON data efficiently can be crucial. For those working with Bash, the jq tool offers a powerful solution for manipulating and parsing JSON, especially when dealing with complex, nested structures. In this blog post, we will explore how to use jq to parse nested JSON without the hassle of splitting on whitespace, preserving the integrity of the data. Q1: What is jq and why is it useful in Bash scripting? A1: jq is a lightweight, flexible, and command-line JSON processor. It is highly useful in Bash scripting because it allows users to slice, filter, map, and transform structured data with a very clear syntax.
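    A small sketch (the JSON document is illustrative):

        json='{"user": {"name": "Ada Lovelace", "roles": ["admin", "dev"]}}'
        # -r emits raw strings; command substitution keeps each value whole,
        # so the space in "Ada Lovelace" never triggers word splitting.
        name=$(jq -r '.user.name' <<< "$json")
        first_role=$(jq -r '.user.roles[0]' <<< "$json")
        printf 'name=%s, first role=%s\n' "$name" "$first_role"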
  • Posted on
    Linux provides a powerful toolkit for text processing, one of which is the grep command. This command is commonly used to search for patterns specified by a user. Today, we'll explore an interesting feature of grep - using the -z option to work with NUL-separated "lines." Question: What does the grep -z option do? Answer: The grep -z command allows grep to treat input as a set of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline character. This is particularly useful when dealing with filenames, since filenames can contain newlines and other special characters which might be misinterpreted in standard text processing.
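    For example, pairing it with find -print0 and xargs -0 keeps awkward filenames intact end to end:

        # -z makes grep read and write NUL-terminated records, so filenames
        # containing newlines pass through the pipeline unharmed.
        find . -type f -print0 | grep -z 'report' | xargs -0 -r ls -l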
  • Posted on
    When it comes to optimizing scripts or simply understanding their behavior better, performance profiling is an indispensable tool. In the realm of Linux, perf stat is a powerful utility that helps developers profile applications down to the system call level. Here, we explore how to use perf stat to gain insights into the syscall and CPU usage of Bash scripts. Q1: What is perf stat and what can it do for profiling Bash scripts? A1: perf stat is a performance analyzing tool in Linux, which is part of the broader perf suite of tools. It provides a wide array of performance data, such as CPU cycles, cache hits, and system calls.
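    For example (tracepoint events usually require root or a permissive kernel.perf_event_paranoid setting; the script name is illustrative):

        # Count syscall entries alongside basic CPU statistics for one run.
        sudo perf stat -e raw_syscalls:sys_enter,task-clock,context-switches \
            bash ./myscript.sh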
  • Posted on
    In the world of Linux system administration and monitoring, understanding the network usage of individual processes is crucial for performance tuning, security checks, and diagnostics. Although Linux provides a variety of tools for network monitoring, combining the capabilities of /proc/$PID/fd and ss offers a specific and powerful method to get per-process network usage details. Q1: What is the /proc filesystem? A1: The /proc filesystem is a special filesystem in UNIX-like operating systems that presents information about processes and other system information in a hierarchical file-like structure. It is a virtual filesystem that doesn't exist on disk. Instead, it is dynamically created by the Linux kernel.
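    A sketch of the technique: collect the socket inodes from /proc/$PID/fd, then match them against ss -e output, which prints each socket's inode as ino:<n> (GNU find and sufficient privilege assumed):

        #!/usr/bin/env bash
        # Usage: ./sockets.sh <pid>
        pid=${1:?usage: $0 <pid>}
        inodes=$(find "/proc/$pid/fd" -type l -printf '%l\n' 2>/dev/null |
                 sed -n 's/^socket:\[\([0-9]\+\)\]$/\1/p' | paste -sd'|')
        [ -n "$inodes" ] || { echo "no sockets for PID $pid" >&2; exit 1; }
        ss -tunae | grep -E "ino:($inodes)\b"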
  • Posted on
    tmux is an indispensable tool for many developers and system administrators, providing powerful terminal multiplexing capabilities that make multitasking in a terminal environment both efficient and straightforward. One common challenge, however, can be dealing with detached sessions, especially when automating tasks. In this blog post, we'll explore how to programmatically recover a detached tmux session using a script, simplifying the process and enhancing your workflow. Q1: What is a tmux session, and what does it mean for a session to be detached? A1: A tmux session is a collection of virtual windows and panes within a terminal, allowing users to run multiple applications side-by-side and manage multiple tasks.
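    A minimal sketch (the default session name is illustrative):

        #!/usr/bin/env bash
        # Reattach to a session if it exists; otherwise create it.
        session=${1:-main}
        if tmux has-session -t "$session" 2>/dev/null; then
            tmux attach-session -t "$session"
        else
            tmux new-session -s "$session"
        fi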
  • Posted on
    In the world of Linux, understanding how to control processes effectively is fundamental for system administration and scripting. Today, we'll explore the use of the timeout command to manage processes by implementing a grace period with SIGTERM before escalating to SIGKILL. Q1: What is the timeout command? A1: The timeout command in Linux is used to run a specified command and terminate it if it hasn't finished within a given time limit. This tool is particularly useful for managing scripts or commands that might hang or take too long to execute, potentially consuming unnecessary resources. Q2: What are SIGTERM and SIGKILL signals? A2: In Linux, SIGTERM (signal 15) and SIGKILL (signal 9) are used to terminate processes.
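    For example (the durations and task name are illustrative):

        # Send SIGTERM after 30 s; if the process is still running 5 s
        # later, escalate to SIGKILL, which cannot be caught or ignored.
        timeout --signal=TERM --kill-after=5 30 ./long_running_task.sh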
  • Posted on
    Introduction to LD_PRELOAD in Linux: LD_PRELOAD is an environment variable used to load specific libraries before any other when a program is run. This can be used to alter the behavior of existing programs without changing their source code by injecting your own custom functions. However, there might be scenarios when you want to set LD_PRELOAD temporarily, without altering the environment or affecting other running applications. This Q&A guide covers the essentials of achieving this. Q1: What does LD_PRELOAD do? A: LD_PRELOAD specifies one or more shared libraries that a program should load before any other when it runs.
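    A minimal sketch (the library path and program name are illustrative):

        # Scope the preload to a single command; the shell's own
        # environment and other processes are unaffected.
        LD_PRELOAD=/usr/local/lib/libmyhook.so ./target_program

        # Equivalent, with the one-shot scoping spelled out via env:
        env LD_PRELOAD=/usr/local/lib/libmyhook.so ./target_program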
  • Posted on
    In high-performance computing environments or in scenarios where real-time processing is crucial, any delay—even milliseconds—can be costly. Linux provides mechanisms for fine-tuning how memory is managed, and one of these mechanisms involves ensuring that specific processes do not swap their memory to disk. Here's a detailed look at how this can be achieved using mlockall via a Linux bash script. Q: Can you explain what mlockall is and why it might be used in a script? A: mlockall is a system call in Linux that allows a process to lock all of its current and future memory pages so that they cannot be swapped to disk.
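    There is no coreutils wrapper for the mlockall() syscall itself, so a script's usual role is to raise the locked-memory limit, launch a program that is assumed to call mlockall(MCL_CURRENT | MCL_FUTURE) internally, and verify the result; a sketch (realtime_app is hypothetical):

        # Allow unlimited locked memory for this shell (needs privilege).
        ulimit -l unlimited
        ./realtime_app &                 # assumed to call mlockall() itself
        pid=$!
        sleep 1
        grep VmLck "/proc/$pid/status"   # non-zero once pages are locked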
  • Posted on
    In the realm of computing, especially in environments where multiple processes or instances need to access and modify the same resources concurrently, mutual exclusion (mutex) is crucial to prevent conflicts and preserve data integrity. This article explains how to implement a mutex across distributed systems using the flock command in Linux Bash, particularly when the systems share files over Network File System (NFS). Q&A on Implementing Mutex with flock over NFS Q: What is flock and how is it used in Linux? A: flock is a command-line utility in Linux used to manage locks from shell scripts or command line.
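    A sketch of the pattern (the lock path is illustrative; note that flock semantics on NFS depend on the kernel and NFS version, with modern Linux clients emulating flock via POSIX locks):

        #!/usr/bin/env bash
        lockfile=/mnt/shared/myjob.lock      # on the shared NFS mount
        exec 9>"$lockfile"                   # open fd 9 for locking
        if flock -w 30 9; then               # wait up to 30 s for the lock
            echo "lock acquired, running critical section"
            sleep 5                          # placeholder for real work
        else
            echo "could not acquire lock" >&2
            exit 1
        fi                                   # lock released when fd 9 closes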
  • Posted on
    In this article, we'll explore the use of systemd-run --scope --user to launch processes within a new control group (cgroup) on Linux systems, utilizing systemd's management capabilities to handle resource limitations and dependencies. This approach provides a flexible and powerful way to manage system resources at the granularity of individual processes or groups of processes. Q1: What is a cgroup? A: A cgroup, or control group, is a feature of the Linux kernel that allows you to allocate resources—such as CPU time, system memory, network bandwidth, or combinations of these resources—among user-defined groups of tasks (processes).
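    For example (the limits and command are illustrative):

        # Run a command in its own transient scope unit (and cgroup) under
        # the per-user systemd manager, with resource limits attached.
        systemd-run --scope --user -p MemoryMax=500M -p CPUQuota=50% ./heavy_task.sh

        # List the transient scope units currently running:
        systemctl --user list-units --type=scope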