[{"content":"Over the Christmas holiday, I migrated the classic UNIX tree utility to Zig, creating a modernized fork called bo. The original tree project has been around since the 90s, displaying directory structures in a hierarchical format. While the C codebase is elegant, it carries decades of legacy baggage. Obsolete platform support, ancient Makefile, and preprocessing macros for long-dead operating systems.\nThe migration resulted in a lot of deleting which is a lot of fun and I replaced the Makefile with Zig\u0026rsquo;s build system, removed support for OS/2 and proprietary HP systems, deleted the archaic .lsm metadata format, and embedded the man page directly into the binary. This post walks through the technical decisions and trade-offs of modernizing a 30-year-old C project.\nWhy Zig? I wanted to work with systems code without writing C (there is nothing wrong with C, I just prefer only reading it) and without dealing with C++\u0026rsquo;s \u0026ldquo;dogwater\u0026rdquo; tooling/stdlib or fighting Rust\u0026rsquo;s concept fatigue. Zig\u0026rsquo;s killer feature is seamless C interop without FFI bindings, the compiler itself is a cross-platform C compiler with extensive target support. This makes it ideal for wrapping existing C code while gradually porting pieces to Zig\u0026rsquo;s more modern syntax. However, one could argue that seamless C interop is also its biggest weakness, as it requires a deep understanding of C before you can rewrite it in Zig. Plus, market forces aren\u0026rsquo;t on Zig\u0026rsquo;s side at this pre-v1.0 stage. Nonetheless, I find it quite pleasing to work with. In fact, I managed to burn out my two-year-old laptop\u0026rsquo;s Intel CPU while compiling Zig from source in March 2025!\nArchaeology: Going Back in Time Before diving into the build system, I needed to understand what I was inheriting. The original tree repository contained some interesting things:\ntree.lsm: A Linux Software Map file from the 1990s. Between 1994-2000, developers manually uploaded these metadata files to curated databases hosted on static web pages. The LSM project shut down in the early 2000s. This file served no purpose in 2026.\ntree.1: A man page written in groff syntax, cryptic to read and edit. I eventually embedded this directly into the binary so users can run bo man without needing the system man command installed.\nMakefile: Surprisingly readable until I noticed the platform targets: HP/UX, HP NonStop, and OS/2. The first two are proprietary HP systems that never ran open source software anyway. OS/2 was IBM\u0026rsquo;s post-DOS operating system, long extinct, but it had infected the codebase with __EMX__ preprocessor blocks throughout the C files.\nZig doesn\u0026rsquo;t support these platforms (LLVM doesn\u0026rsquo;t either). Breaking compatibility with OS/2 meant I could delete hundreds of lines of conditional compilation. The trade-off was obvious, lose support for \u0026lt;0.1% of theoretical users, gain a clean codebase and modern cross-compilation for platforms people actually use.\nFrom Makefile to build.zig With the archaeological survey complete, it was time to replace the Makefile with Zig\u0026rsquo;s build system. The goal was straightforward. 
compile the existing C source files, link them with a Zig entrypoint, and support cross-compilation without platform-specific Makefile commands.\nconst std = @import(\u0026#34;std\u0026#34;); pub fn build(b: *std.Build) void { const target = b.standardTargetOptions(.{}); const optimize = b.standardOptimizeOption(.{}); const exe = createExecutable(b, target, optimize); b.installArtifact(exe); makeRunStep(b, exe); } ... The build function is minimal: configure target and optimization, create the executable, and add a run step for convenience.\nAdding C Source Files The meat of the build is in createExecutable, where I compile the original C source files and link them with a Zig entrypoint at src/main.zig. Since I\u0026rsquo;m wrapping C code, I need to explicitly link libc. Pure Zig programs typically don\u0026rsquo;t need this, but C dependencies require it.\nfn createExecutable(b: *std.Build, target: std.Build.ResolvedTarget, optimize: std.builtin.OptimizeMode) *std.Build.Step.Compile { const common_sources = [_][]const u8{ \u0026#34;tree.c\u0026#34;, \u0026#34;list.c\u0026#34;, \u0026#34;hash.c\u0026#34;, \u0026#34;color.c\u0026#34;, \u0026#34;file.c\u0026#34;, \u0026#34;filter.c\u0026#34;, \u0026#34;info.c\u0026#34;, \u0026#34;unix.c\u0026#34;, \u0026#34;xml.c\u0026#34;, \u0026#34;json.c\u0026#34;, \u0026#34;html.c\u0026#34;, }; var sources_buf: [12][]const u8 = undefined; var num_sources: usize = 0; for (common_sources) |src| { sources_buf[num_sources] = src; num_sources += 1; } // Conditionally include strverscmp.c // Only include strverscmp.c if not Linux or if Android const needs_strverscmp = target.result.os.tag != .linux or target.result.abi == .android; if (needs_strverscmp) { sources_buf[num_sources] = \u0026#34;strverscmp.c\u0026#34;; num_sources += 1; } const exe = b.addExecutable(.{ .name = \u0026#34;bo\u0026#34;, .root_module = b.createModule(.{ .root_source_file = b.path(\u0026#34;src/main.zig\u0026#34;), .target = target, .optimize = optimize, }), }); const sources = sources_buf[0..num_sources]; const cflags = \u0026amp;[_][]const u8{ \u0026#34;-std=c11\u0026#34;, \u0026#34;-Wpedantic\u0026#34;, \u0026#34;-Wall\u0026#34;, \u0026#34;-Wextra\u0026#34;, \u0026#34;-Wstrict-prototypes\u0026#34;, \u0026#34;-Wshadow\u0026#34;, \u0026#34;-Wconversion\u0026#34;, }; exe.addCSourceFiles(.{ .files = sources, .flags = cflags, }); addPreprocessorDefines(exe, target); exe.linkLibC(); return exe; } ... Notice the conditional compilation in build.zig: strverscmp.c is included for non-Linux targets and for Android. This is because Android\u0026rsquo;s bionic libc doesn\u0026rsquo;t provide strverscmp, unlike glibc on standard Linux systems. This workaround is temporary and inherited from the original codebase; I plan to port strverscmp to pure Zig to eliminate platform conditionals entirely.
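To give a feel for what that port could look like, here is a minimal sketch of a version-aware comparison in Zig. This is my own illustration rather than bo's actual code, and it deliberately skips glibc's special treatment of leading zeros:

```zig
const std = @import("std");

// Sketch: compare digit runs numerically and everything else bytewise.
fn versCmp(a: []const u8, b: []const u8) i32 {
    var i: usize = 0;
    var j: usize = 0;
    while (i < a.len and j < b.len) {
        if (std.ascii.isDigit(a[i]) and std.ascii.isDigit(b[j])) {
            // Find the end of each digit run, then compare the runs as numbers.
            var ia = i;
            while (ia < a.len and std.ascii.isDigit(a[ia])) ia += 1;
            var jb = j;
            while (jb < b.len and std.ascii.isDigit(b[jb])) jb += 1;
            const na = std.fmt.parseInt(u64, a[i..ia], 10) catch return 0;
            const nb = std.fmt.parseInt(u64, b[j..jb], 10) catch return 0;
            if (na != nb) return if (na < nb) -1 else 1;
            i = ia;
            j = jb;
        } else {
            if (a[i] != b[j]) return if (a[i] < b[j]) -1 else 1;
            i += 1;
            j += 1;
        }
    }
    // The shorter remaining suffix sorts first.
    if (a.len - i == b.len - j) return 0;
    return if (a.len - i < b.len - j) -1 else 1;
}

test "digit runs compare numerically" {
    try std.testing.expect(versCmp("file9", "file10") < 0);
    try std.testing.expect(versCmp("file10", "file10") == 0);
}
```

A faithful replacement for strverscmp.c would still need glibc's leading-zero semantics (where zero-prefixed runs compare like fractions), but the shape of the port is roughly this simple.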
C Preprocessor Macros Zig\u0026rsquo;s build system lets me define C preprocessor macros programmatically rather than scattering them across Makefiles. The addPreprocessorDefines function handles platform-specific C macros.\nfn addPreprocessorDefines(exe: *std.Build.Step.Compile, target: std.Build.ResolvedTarget) void { // Universal defines for large file support exe.root_module.addCMacro(\u0026#34;_LARGEFILE_SOURCE\u0026#34;, \u0026#34;\u0026#34;); exe.root_module.addCMacro(\u0026#34;_FILE_OFFSET_BITS\u0026#34;, \u0026#34;64\u0026#34;); const os_tag = target.result.os.tag; switch (os_tag) { .linux =\u0026gt; { exe.root_module.addCMacro(\u0026#34;_GNU_SOURCE\u0026#34;, \u0026#34;\u0026#34;); }, .solaris, .illumos =\u0026gt; { exe.root_module.addCMacro(\u0026#34;_XOPEN_SOURCE\u0026#34;, \u0026#34;500\u0026#34;); exe.root_module.addCMacro(\u0026#34;_POSIX_C_SOURCE\u0026#34;, \u0026#34;200112\u0026#34;); }, else =\u0026gt; {}, } if (target.result.abi == .android) { exe.root_module.addCMacro(\u0026#34;_LARGEFILE64_SOURCE\u0026#34;, \u0026#34;\u0026#34;); } } fn makeRunStep(b: *std.Build, exe: *std.Build.Step.Compile) void { const run_cmd = b.addRunArtifact(exe); run_cmd.step.dependOn(b.getInstallStep()); // Allow passing arguments if (b.args) |args| { run_cmd.addArgs(args); } const run_step = b.step(\u0026#34;run\u0026#34;, \u0026#34;Run the tree command\u0026#34;); run_step.dependOn(\u0026amp;run_cmd.step); } The run step makes development ergonomic: zig build run -- -L 2 runs the binary with arguments directly rather than the cumbersome zig build \u0026amp;\u0026amp; ./zig-out/bin/bo -L 2.\nCross-compilation without pain: Want a macOS ARM binary? zig build -Dtarget=aarch64-macos. Linux on ARM? zig build -Dtarget=aarch64-linux-gnu. No need to install cross-toolchains or configure separate build environments. It is much easier to produce binaries from one Linux machine for all major platforms.\nEmbedded man page: I added Python scripts to convert the original man page into a Zig string constant (a sketch of the idea follows below). Now bo man displays help without requiring the system man command. This is particularly useful in minimal environments like containers.\nOne build system for them all: The original Makefile had platform-specific rules for HP/UX, Solaris, and others. Now build.zig handles everything declaratively based on the target triple.
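As a rough illustration of the man-page embedding mentioned above (the file name and function here are hypothetical, and the project generates the string constant with Python scripts rather than using @embedFile directly):

```zig
const std = @import("std");

// Hypothetical pre-rendered plain-text man page sitting next to the source.
const man_text = @embedFile("man/bo.1.txt");

// What a `bo man` handler can boil down to: dump the embedded page to stdout.
fn printMan() !void {
    try std.io.getStdOut().writeAll(man_text);
}
```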
Final The full migration isn\u0026rsquo;t complete; there are still C pieces I\u0026rsquo;d like to eliminate:\nPort strverscmp to Zig: Currently, I conditionally compile strverscmp.c for Android because bionic libc lacks it (see the Android bionic source). Porting this function to pure Zig would remove platform-specific compilation entirely.\nNative Windows support: The C codebase still relies on POSIX APIs that don\u0026rsquo;t work on Windows. If I port these sections to Zig\u0026rsquo;s cross-platform standard library, Windows becomes a first-class target without conditional compilation hacks.\nJSON parsing: Use Zig\u0026rsquo;s json parser instead of the hand-rolled json.c.\nModernizing legacy C projects is satisfying when you pick your battles carefully. Dropping OS/2 support was trivial; nobody cares about IBM\u0026rsquo;s 1990s operating system. The payoff was deleting hundreds of lines of preprocessor cruft.\nZig\u0026rsquo;s C interop makes incremental migration practical. I didn\u0026rsquo;t need to rewrite everything upfront: wrap the C in a build.zig, add a Zig entrypoint, then port pieces over time as needed.\nThe tree → bo migration took a weekend, resulted in less code, and now cross-compiles to dozens of targets from a single command. Not bad for a holiday project.\n","permalink":"http://occamist.dev/posts/going-back-in-time-to-reinvent-the-tree-with-zig/","summary":"\u003cp\u003eOver the Christmas holiday, I migrated the classic UNIX \u003ccode\u003etree\u003c/code\u003e utility to Zig, creating a modernized fork called \u003ca href=\"https://github.com/occamist/bo\"\u003e\u003ccode\u003ebo\u003c/code\u003e\u003c/a\u003e. The original tree project has been around since the 90s, displaying directory structures in a hierarchical format. While the C codebase is elegant, it carries decades of legacy baggage: obsolete platform support, an ancient Makefile, and preprocessor macros for long-dead operating systems.\u003c/p\u003e\n\u003cp\u003eThe migration involved a lot of deleting, which is a lot of fun: I replaced the Makefile with Zig\u0026rsquo;s build system, removed support for OS/2 and proprietary HP systems, deleted the archaic \u003ccode\u003e.lsm\u003c/code\u003e metadata format, and embedded the man page directly into the binary. This post walks through the technical decisions and trade-offs of modernizing a 30-year-old C project.\u003c/p\u003e","title":"Going Back in Time to Reinvent the Tree with Zig"},{"content":"Recently, I\u0026rsquo;ve been working on simplifying the Laverna CLI integration with Anki. What seemed challenging during the planning phase turned out to be elegant in execution. Here\u0026rsquo;s a summary of the challenges and takeaways.\nThe Original Workflow Three months ago, I built an integration between Laverna CLI and Anki with this workflow:\nUser downloads a custom Cloze note type (note-type.apkg) from the repository and then imports to Anki (one-time setup) User prepares CSV data in the expected format User runs Laverna CLI, which outputs enriched CSV User manually imports the CSV via Anki\u0026rsquo;s deck import tab Everything looked normal on the surface, but steps 1 and 4 were cumbersome and repetitive. Since Anki doesn\u0026rsquo;t provide an official SDK or REST API, I needed to find another approach. Steps 1 and 4 were also more error prone since they contained my internal app logic.\nDiscovery While researching, I discovered Anki Connect, which is an addon written as a single __init__.py file. Anki also has documentation on writing addons. It was a quite readable way to interface with Anki.\nAdding Python to a 100% pure Go repository felt strange, but I decided to build a proof of concept since there was no other way to do it. I had tried reverse engineering Anki\u0026rsquo;s sqlite3 DB, but that approach is strongly discouraged.\nThe constraint was clear: I had to use Anki\u0026rsquo;s bundled Python dependencies, which vary by Anki version. Looking at Anki Connect\u0026rsquo;s codebase, I noticed it used only the standard library and no external dependencies, which is an outdated but safe approach: it basically had to re-invent an HTTP server and client via Unix sockets.\nFortunately, flask, waitress, requests, and jsonschema were already available in Anki\u0026rsquo;s dependencies. I chose Flask (for HTTP abstractions) and Waitress (for the WSGI server) since I needed an endpoint to receive enriched CSV data and trigger Anki\u0026rsquo;s import functionality.\nAnother constraint was that addon code can only run after the Anki application has started. This meant the development workflow needed some sort of copy/paste or symlinking step, which was not pretty but doable.
Basically, Anki addons are zipped __init__.py files that rely on the Anki library and its dependencies; they rarely vendor their own dependencies and come with no solid dependency hashes.\nThe Problem Everything seemed fine until I hit an SQLite error:\n# __init__.py from flask import Flask, jsonify from waitress import serve import threading from aqt import mw app = Flask(__name__) @app.route(\u0026#39;/\u0026#39;) def hello(): note_count = mw.col.note_count() # RuntimeError: Cannot access collection from a background thread return jsonify({\u0026#39;message\u0026#39;: \u0026#39;hello\u0026#39;, \u0026#39;notes\u0026#39;: note_count}) def start_server(): serve(app, host=\u0026#39;127.0.0.1\u0026#39;, port=5000) thread = threading.Thread(target=start_server, daemon=True) thread.start() The issue: Anki\u0026rsquo;s SQLite driver connection isn\u0026rsquo;t thread-safe. The collection object (mw.col) can only be accessed from Qt\u0026rsquo;s main thread. The documentation mentions this. Additionally, __init__.py cannot block, so we run the HTTP server in a daemon thread :)\nThe Solution The solution was to use mw.taskman.run_on_main() to run queries on the main thread. But how do I collect the result and return it to my HTTP handler?\nCheck out Python\u0026rsquo;s Future concept:\nfrom concurrent.futures import Future @app.route(\u0026#39;/\u0026#39;) def hello(): future: Future = Future() def get_count(): try: count = mw.col.note_count() future.set_result(count) except Exception as e: future.set_exception(e) mw.taskman.run_on_main(get_count) try: note_count = future.result() except Exception as e: return jsonify({\u0026#39;error\u0026#39;: str(e)}), 500 return jsonify({\u0026#39;message\u0026#39;: \u0026#39;hello\u0026#39;, \u0026#39;notes\u0026#39;: note_count}), 200 The handler blocks on future.result() until the main thread sets a value, and any exception is re-raised and handled there.
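This main-thread round trip is generic enough to factor into a helper. A minimal sketch of such a helper, assuming the same aqt APIs as above; this is my own illustration, not the addon's actual code:

```python
from concurrent.futures import Future
from typing import Callable, TypeVar

from aqt import mw

T = TypeVar("T")


def call_on_main(fn: Callable[[], T]) -> T:
    """Run fn on Anki's main (Qt) thread and block until it finishes."""
    future: Future = Future()

    def runner() -> None:
        try:
            future.set_result(fn())
        except Exception as e:
            future.set_exception(e)

    mw.taskman.run_on_main(runner)
    return future.result()  # re-raises any exception from the main thread
```

With this, the handler body shrinks to something like note_count = call_on_main(lambda: mw.col.note_count()).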
Simplifying Further I\u0026rsquo;m not a fan of try/except boilerplate. After reading the note_count implementation, I decided to use tuples for error handling:\nfrom concurrent.futures import Future @app.route(\u0026#39;/\u0026#39;) def hello() -\u0026gt; tuple[Response, HTTPStatus]: future: Future = Future() def get_count() -\u0026gt; None: col = mw.col if col is None: return future.set_result((None, \u0026#34;Failed to load collection\u0026#34;)) count = col.note_count() future.set_result((count, None)) mw.taskman.run_on_main(get_count) (res, err) = future.result() if err is not None: return jsonify({\u0026#34;message\u0026#34;: err}), HTTPStatus.INTERNAL_SERVER_ERROR return jsonify({\u0026#39;message\u0026#39;: \u0026#39;hello\u0026#39;, \u0026#39;notes\u0026#39;: res}), HTTPStatus.OK This pattern might not be \u0026ldquo;Pythonic,\u0026rdquo; but it simplified everything beautifully.\nBuilding the Full Logic After the PoC worked, I implemented the complete solution (see PR):\nValidate the incoming request Perform Anki validations: create note type if missing, create deck if missing Read CSV and import via Anki functions (mostly protobuf types) Update Laverna CLI to POST directly to the addon\u0026rsquo;s endpoint instead of writing files Add flags and configuration options The New Workflow Starting with Laverna v0.3.0, the workflow is much smoother:\nUser downloads Laverna Anki Addon from the official addon website (one-time setup) User prepares CSV data User runs Laverna CLI ❯ laverna anki --help NAME: laverna anki - Downloads audios to anki media folder and generates anki CSV file USAGE: laverna anki [options] OPTIONS: --profile string, -p string anki profile name --deck string, -d string anki deck name --endpoint URL, -e URL anki addon endpoint URL (default: \u0026#34;http://localhost:5555/v1/import-csv\u0026#34;) --speed SPEED, -s SPEED specify the SPEED of audios (default: \u0026#34;normal\u0026#34;) --voice VOICE, -v VOICE specify the VOICE of audios --shuffle shuffles the text choices per row (default: true) --strip-csv-header strips the csv header from the generated anki CSV file (default: true) --stdout prints the generated anki CSV file to stdout (default: false) --help, -h show help GLOBAL OPTIONS: --file FILE, -f FILE filepath to prompt FILE --workers int, -w int maximum number of concurrent downloads (default: 16) Example: laverna anki --profile Talha --deck my-viet-deck --voice vi --file ./anki-vi-example.csv Final Happy new year to everyone!\n","permalink":"http://occamist.dev/posts/making-laverna-anki-addon/","summary":"\u003cp\u003eRecently, I\u0026rsquo;ve been working on simplifying the Laverna CLI integration with Anki. What seemed challenging during the planning phase turned out to be elegant in execution. 
Here\u0026rsquo;s a summary of the challenges and takeaways.\u003c/p\u003e\n\u003ch2 id=\"the-original-workflow\"\u003eThe Original Workflow\u003c/h2\u003e\n\u003cp\u003eThree months ago, I built an integration between Laverna CLI and Anki with this workflow:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eUser downloads a custom Cloze note type (\u003ccode\u003enote-type.apkg\u003c/code\u003e) from the repository and then imports to Anki (one-time setup)\u003c/li\u003e\n\u003cli\u003eUser prepares CSV data in the expected format\u003c/li\u003e\n\u003cli\u003eUser runs Laverna CLI, which outputs enriched CSV\u003c/li\u003e\n\u003cli\u003eUser manually imports the CSV via Anki\u0026rsquo;s deck import tab\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eEverything looked normal on the surface, but steps 1 and 4 were cumbersome and repetitive. Since Anki doesn\u0026rsquo;t provide an official SDK or REST API, I needed to find another approach. Steps 1 and 4 were also more error prone since they contained my internal app logic.\u003c/p\u003e","title":"Making Laverna Anki Addon"},{"content":"In this post, I announce an overhaul of Laverna CLI and a new version which supports new subcommands such as run and anki.\nThe Recap In this blog post I introduced Laverna CLI, a language learning tool designed to swallow Google\u0026rsquo;s speech API. It was immensely useful, but it lacked some features such as Anki integration.\nThe New Anki Integration Starting with v0.1.0, users can use the new anki command to create anki decks with laverna CLI.\nNow with the most amazing Go project urfave/cli, it was super easy to support subcommands and flags of subcommands. I was happy to dodge spf13/cobra\u0026rsquo;s complexity.\nAnd the previous default laverna command is now the same as laverna run to keep the sweet backward compatibility. Plus we get shell completions for bash/zsh/fish as well. Make sure to check it out here.\n❯ laverna --help NAME: laverna - A new cli application USAGE: laverna [global options] [command [command options]] DESCRIPTION: Download Google Translate audios as mp3 files COMMANDS: run Downloads audios anki Downloads audios to anki media folder and generates anki CSV file help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --file FILE, -f FILE filepath to prompt FILE --workers int, -w int maximum number of concurrent downloads (default: 16) --help, -h show help ❯ laverna anki --help NAME: laverna anki - Downloads audios to anki media folder and generates anki CSV file USAGE: laverna anki [options] OPTIONS: --profile string, -p string anki profile name --speed normal specify the speed of audios, must be one of these values: `normal`, `slow`, `slowest` (default: \u0026#34;normal\u0026#34;) --voice string specify the voice of audios --shuffle, -s shuffles A,B,C,D choices per row (default: true) --strip-csv-header, --strip strips csv header from the generated anki CSV file (default: true) --help, -h show help GLOBAL OPTIONS: --file FILE, -f FILE filepath to prompt FILE --workers int, -w int maximum number of concurrent downloads (default: 16) For the new anki command, you need to provide a CSV file, a voice name and your anki profile name. The profile name is pretty much for determining your Anki media folder so that the downloaded audios get stored there. The voice name is Google\u0026rsquo;s language ISO code for the specific voices. 
The CSV file is where everything is defined.\nYour CSV file should look like the example below.\nText,HelperText,TextA,TextB,TextC,TextD ฉันชอบ{{c1::ฟัง}}เพลง,I like to listen to music,ฟัง,เล่น,ดู,อ่าน It must have the text that uses \u0026ldquo;{{c1::ANSWERWORD}}\u0026rdquo; for the cloze of the note type. It must specify the helper text, which is a sentence for the reader to translate. It must have 4 text choices to guess the answer word. Then we can run laverna anki --profile Talha --voice th --file thai.csv and it will output our actual CSV deck to be imported into Anki; by the time this command finishes, we have all the audios.\nThe below CSV will be called \u0026ldquo;Athai.csv\u0026rdquo;; the \u0026ldquo;A\u0026rdquo; prefix indicates the audio filenames that were created in the media folder of Anki. It is a reference to the unique audio files.\nฉันชอบ{{c1::ฟัง}}เพลง,I like to listen to music,ฟัง,เล่น,ดู,อ่าน,[sound:a.mp3],[sound:b.mp3],[sound:c.mp3],[sound:d.mp3],[sound:e.mp3] Finally, if you have imported the Cloze Multi Choice Audio note type into Anki, you can go ahead and import the CSV in File \u0026gt; Import \u0026gt; Select CSV, then choose the Cloze Multi Choice Audio note type and pick comma as the delimiter.\nFinal words The best thing about this approach is you can sync the whole media folder with Anki Sync so that you don\u0026rsquo;t need to manage storage on your devices manually. And this is very exciting if you have the Anki mobile app on your phone; you can really make the most of your time learning languages.\nThe CSV inputs can generally be generated via Gemini; in the future I plan to wrap the Gemini API inside the Laverna CLI, but there is no certain roadmap as I am testing it for now. I am currently testing Gemini\u0026rsquo;s Vietnamese word generation; an example CSV can be found here\n","permalink":"http://occamist.dev/posts/anki-feature-for-laverna-cli/","summary":"\u003cp\u003eIn this post, I announce an overhaul of Laverna CLI and a new version which supports new subcommands such as \u003ccode\u003erun\u003c/code\u003e and \u003ccode\u003eanki\u003c/code\u003e.\u003c/p\u003e\n\u003ch2 id=\"the-recap\"\u003eThe Recap\u003c/h2\u003e\n\u003cp\u003eIn this blog \u003ca href=\"https://occamist.dev/posts/a-christmas-gift-for-language-learners\"\u003epost\u003c/a\u003e I introduced Laverna CLI, a language learning tool designed to swallow Google\u0026rsquo;s speech API. It was immensely useful, but it lacked some features such as Anki integration.\u003c/p\u003e\n\u003ch2 id=\"the-new-anki-integration\"\u003eThe New Anki Integration\u003c/h2\u003e\n\u003cp\u003eStarting with \u003ccode\u003ev0.1.0\u003c/code\u003e, users can use the new \u003ccode\u003eanki\u003c/code\u003e command to create anki decks with laverna CLI.\u003c/p\u003e","title":"Anki Feature For Laverna CLI"},{"content":"In this post, I will be announcing the new features and the existing struggles of my keyboard app.\nThe Recap Let\u0026rsquo;s recap on \u0026ldquo;what does this app solve?\u0026rdquo;: an onscreen keyboard app that maps English (UK/US) keyboard inputs to various languages.\nVideo not supported The New Features Support for new languages such as German, Italian, Lao, and Vietnamese. And a bug fix regarding language-specific fonts: the app now adjusts the natural font based on the language choice. I was quite fascinated by how easy it was to add Vietnamese. Lao required me to evaluate font choices, but I liked how similar it felt to the Thai Abugida. 
The Abugida composition was beautifully handled by the font.\nThe Struggles The languages that made me struggle the most are Greek, Tibetan, and Korean, making the total supported language count 6.\nGreek characters were straightforward; they didn\u0026rsquo;t require additional fonts. However, there is a specific diaeresis/dialytika mark that launches a different layout. I couldn\u0026rsquo;t find a solution to launch a third layout since my program supports a maximum of two layouts per language.\nTibetan requires a specific font like Lao and Thai, which contributes to the growing bundle size. The size is negligible; however, Tibetan has more than four layouts, which requires an algorithm change to all layouts to perform \u0026ldquo;shift\u0026rdquo; key functionality on different keys. This breaks the two-layouts-per-language rule in my app. But I have seen these patterns in languages like Greek, so I may need to address this in the near future to support more languages. Furthermore, Tibetan composition is completely handled by the font, so I am lucky with the embedded fonts.\nKorean comes with its Hangul composition complexity. I initially thought Hangul composition was handled by the fonts like in Thai, Lao, or other Indic Abugidas. It turns out Hangul composition is handled by the IME (Input Method Editor of the specific OS), and Tauri seems to be missing that layer. Re-writing Hangul composition is relatively hard and requires derivations of existing methods in my project, and I am no expert in that area. There were third-party libraries such as khangul; however, it requires creating another input state in its context, which would force me to juggle two different input states between language choices, which is not great for the maintainability of my project.\nThe interesting part I discovered was implementing NOOP buttons for Korean. It seems that Korean Hangul really runs out of characters when the \u0026ldquo;shift\u0026rdquo; key is used. There is no CAPS LOCK and fewer SHIFT variant keys, so some keys had to implement NOOP. This was quite a different experience. Overall, I expected Korean fonts to resolve the need for re-writing Hangul composition, but they didn\u0026rsquo;t, so I am disappointed with the way Korean (Hangul) characters work in composition. To add insult to injury, undoing characters is extremely complex with my own Hangul composition API or a third-party Hangul composition API.\nFinal Words Check out the releases for v0.2.0\nIf you are an Arch Linux user, the Pauron bot has got you covered here. Please upvote it! ;)\nFound a bug? Spotted an issue? Have a brilliant idea to share? Don\u0026rsquo;t hesitate to give me a nudge!\n","permalink":"http://occamist.dev/posts/new-release-keyboard-app-v0.2.0/","summary":"\u003cp\u003eIn this post, I will be announcing the new features and the existing struggles of my keyboard app.\u003c/p\u003e\n\u003ch2 id=\"the-recap\"\u003eThe Recap\u003c/h2\u003e\n\u003cp\u003eLet\u0026rsquo;s recap on \u0026ldquo;what does this app solve?\u0026rdquo;: an onscreen keyboard app that maps English (UK/US) keyboard inputs to various languages.\u003c/p\u003e\n\u003cvideo style=\"max-width: 100%; height: auto;\" controls\u003e\n  \u003csource src=\"/typing-thai.webm\" type=\"video/webm\"\u003e\n  Video not supported\n\u003c/video\u003e\n\u003ch2 id=\"the-new-features\"\u003eThe New Features\u003c/h2\u003e\n\u003cp\u003eSupport for new languages such as German, Italian, Lao, and Vietnamese. 
And a bug fix regarding language-specific fonts: the app now adjusts the natural font based on the language choice. I was quite fascinated by how easy it was to add Vietnamese. Lao required me to evaluate font choices, but I liked how similar it felt to the Thai Abugida. The Abugida composition was beautifully handled by the font.\u003c/p\u003e","title":"New Release: Keyboard App v0.2.0"},{"content":"In this post, I announce my newest project: a cross-platform on-screen keyboard that supersedes my previous virtual-keyboard work. Built with Tauri, it\u0026rsquo;s faster, smaller, and works beautifully across Windows, macOS, and Linux.\nWhat does it solve? This is a cross-platform on-screen keyboard for different languages. English (UK/US) keyboard inputs map to various languages. It\u0026rsquo;s a complete rewrite of my previous virtual-keyboard project, built from the ground up with a modern alternative.\nVideo not supported Why Tauri over GTK? Having worked with Python GTK bindings before, the difference is night and day. Here\u0026rsquo;s why Tauri wins:\nBundle Size: Python GTK apps are ultra large; Tauri apps are just a few megabytes instead of 100MB+ monsters.\nCross-Platform Consistency: Python GTK behaves differently across platforms and is harder to rebuild across platforms. Tauri gives you the same smooth experience everywhere and makes it super easy to distribute across Linux/Windows/macOS.\nDeveloper Experience: Modern frontend tools with minimal config = pure joy. Hot reloading, excellent tooling, easy logo customization, built-in asset bundlers, and very easy styling of the look.\nCloser to Native Feel: Uses your system\u0026rsquo;s webkit/webview instead of bundling an entire browser. Your app feels like it belongs on the platform, plus it supports system tray icons and menu actions.\nGetting Started You can build locally:\ngit clone https://github.com/occamist/keyboard-app pnpm install pnpm tauri build Or I strongly recommend downloading the Tauri-made binaries directly from releases.\nArch Linux users get special treatment: there\u0026rsquo;s an AUR package waiting for you. Please upvote it! ;)\nFinal Words สุขสันต์วันภาษาไทย\nJuly 29th marked Thai Language Day. Happy Thai Language Day!\nProjects like this play a small role in supporting linguistic diversity by removing barriers to digital communication. Every language deserves easy access in our modern computing environments.\nFound a bug? Spotted an issue? Have a brilliant idea to share? Don\u0026rsquo;t hesitate to give me a nudge!\n","permalink":"http://occamist.dev/posts/building-cross-platform-magic-from-gtk-to-tauri/","summary":"\u003cp\u003eIn this post, I announce my newest project: a cross-platform on-screen keyboard that supersedes my previous virtual-keyboard work. Built with Tauri, it\u0026rsquo;s faster, smaller, and works beautifully across Windows, macOS, and Linux.\u003c/p\u003e\n\u003ch2 id=\"what-does-it-solve\"\u003eWhat does it solve?\u003c/h2\u003e\n\u003cp\u003eThis is a cross-platform on-screen keyboard for different languages. English (UK/US) keyboard inputs map to various languages. 
It\u0026rsquo;s a complete rewrite of my previous virtual-keyboard project, built from the ground up with a modern alternative.\u003c/p\u003e","title":"Building Cross-Platform Magic: From GTK to Tauri"},{"content":"I have been quiet for a few months, the reason being I had an opportunity to develop new interesting things, and I have been experimenting with some other technologies while trying to find the best use cases.\nMeanwhile, I received a comment about a package that I have been maintaining for Arch Linux, and I had no available time to respond or update my AUR package. If you remember from my previous post Packaging Go for Arch Linux Tutorial, I like maintaining AUR packages, but it gets time consuming when you need to track new releases and update version, SHA and commit hashes by hand. I had to come up with my own niche solution.\nWhat is Pauron? What does it solve? I dedicated a day and created Pauron; when you have lame problems, you solve them with lame languages. Pauron is essentially a single-file program that checks the upstream GitHub URL for your AUR package. If there is no newer version, it will not do anything. If there is a newer version, it will patch the required values in PKGBUILD and .SRCINFO, then push them to AUR with a new commit.\nThis eliminates manual tasks such as entering a new version, entering a new SHA hash and entering a new commit hash, as mentioned in my previous post about AUR packages.\nGetting Started Pauron is meant to be run on GitHub Actions, but you can run it as a cron job anywhere else if you prefer. I have been using the YAML file below to run it periodically. If you have more than one package, you can specify it with -p, equivalent to --pkg-name, like below. You should also set up the AUR_SSH_KEY env variable, which is your private SSH key for AUR.\nTo try it out, you can fork my repository and adjust your GitHub Actions along with the AUR_SSH_KEY secret if you have an AUR account or are interested in AUR package maintenance.\nname: Update AUR packages on: # Manual trigger workflow_dispatch: # Cron trigger on the 22nd of every month at 00:00 UTC schedule: - cron: \u0026#39;0 0 22 * *\u0026#39; jobs: update-k3sup-aur: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v4 - name: Set up Python uses: actions/setup-python@v5 with: python-version: \u0026#34;3.13\u0026#34; - name: Install dependencies run: | sudo apt update sudo apt install -y python3-pip pip3 install requests - name: Update AUR package env: AUR_SSH_KEY: ${{ secrets.AUR_SSH_KEY }} run: | python main.py -p k3sup Or you can simply use the Pauron PyPI package with pipx, which is the \u0026ldquo;RECOMMENDED\u0026rdquo; way.\nAUR_SSH_KEY=\u0026#34;$(cat ~/.ssh/pauron)\u0026#34; pipx run pauron -p k3sup The normal output, if your package is up to date, will look like below:\n\u0026gt; Run python main.py -p k3sup INFO: SSH key fingerprint: 256 SHA256:TwGFdHlbNpteILDQx4/cOXD/PiDNnq2C9B/0h7XsteA pauron@pauron.com (ED25519) # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 INFO: Added AUR host key to known_hosts INFO: SSH test completed with exit code: 1 INFO: SSH stderr: Welcome to AUR, occamist! Interactive shell is disabled. Try `ssh aur@aur.archlinux.org help` for a list of commands. 
INFO: Processing AUR package: ssh://aur@aur.archlinux.org/k3sup.git INFO: Cloning ssh://aur@aur.archlinux.org/k3sup.git INFO: Parsing PKGBUILD... INFO: Extracted metadata: INFO: pkgver: 0.13.9 INFO: sha256sums: (\u0026#39;4764f787f55fae4dab9527c5d829fc70a522e1c2b7f7a23cde6df1096fefbc31\u0026#39;) INFO: _commit: (\u0026#39;a1700f64dcffd249890b13cf6d97f4c120a53e08\u0026#39;) INFO: source: (\u0026#34;${pkgname}-${pkgver}.tar.gz::https://github.com/alexellis/k3sup/archive/${pkgver}.tar.gz\u0026#34;) INFO: owner_name: alexellis INFO: repo_name: k3sup INFO: Checking for latest Github release version... INFO: Newest Github version(0.13.9) and current PKGBUILD version(0.13.9) are same, quitting. If the package is not up to date, the output will look like below:\n\u0026gt; Run python main.py -p k3sup INFO: SSH key fingerprint: 256 SHA256:TwGFdHlbNpteILDQx4/cOXD/PiDNnq2C9B/0h7XsteA pauron@pauron.com (ED25519) # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 # aur.archlinux.org:22 SSH-2.0-OpenSSH_10.0 INFO: Added AUR host key to known_hosts INFO: SSH test completed with exit code: 1 INFO: SSH stderr: Welcome to AUR, occamist! Interactive shell is disabled. Try `ssh aur@aur.archlinux.org help` for a list of commands. INFO: Processing AUR package: ssh://aur@aur.archlinux.org/k3sup.git INFO: Cloning ssh://aur@aur.archlinux.org/k3sup.git INFO: Parsing PKGBUILD... INFO: Extracted metadata: INFO: pkgver: 0.13.7 INFO: sha256sums: (\u0026#39;b0c15f99aef35f7bb2dda45b08c2acaa7f6289fa8544f64e3fdaa07892a466a1\u0026#39;) INFO: _commit: (\u0026#39;b7bb7cb246eb639629f204c2aca2b446bfb4b244\u0026#39;) INFO: source: (\u0026#34;${pkgname}-${pkgver}.tar.gz::https://github.com/alexellis/k3sup/archive/${pkgver}.tar.gz\u0026#34;) INFO: owner_name: alexellis INFO: repo_name: k3sup INFO: Checking for latest Github release version... INFO: SHA256 hash for release tag(0.13.9): 4764f787f55fae4dab9527c5d829fc70a522e1c2b7f7a23cde6df1096fefbc31 INFO: Commit hash for release tag(0.13.9): a1700f64dcffd249890b13cf6d97f4c120a53e08 INFO: PKGBUILD file was updated successfully INFO: .SRCINFO file was updated successfully [master 4e17155] v0.13.9 2 files changed, 6 insertions(+), 6 deletions(-) To ssh://aur.archlinux.org/k3sup.git 78947de..4e17155 master -\u0026gt; master INFO: Successfully committed and pushed 0.13.9 Quick note: the upstream source URL needs to be GitHub, since I use the GitHub API to determine the latest release versions, but feel free to PR or open an issue about other version control systems such as GitLab if you think it would be useful!
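For the curious, the patching step itself boils down to a handful of line-oriented substitutions on the PKGBUILD fields you saw in the logs above. A rough sketch of the idea in Python (my own illustration, not Pauron's actual code; it assumes single-line pkgver/pkgrel/sha256sums/_commit fields):

```python
import re


def bump_pkgbuild(text: str, version: str, sha256: str, commit: str) -> str:
    """Rewrite the release-specific fields of a PKGBUILD string."""
    text = re.sub(r"^pkgver=.*$", f"pkgver={version}", text, flags=re.M)
    # A new upstream release resets the package release number.
    text = re.sub(r"^pkgrel=.*$", "pkgrel=1", text, flags=re.M)
    text = re.sub(r"^sha256sums=.*$", f"sha256sums=('{sha256}')", text, flags=re.M)
    text = re.sub(r"^_commit=.*$", f"_commit='{commit}'", text, flags=re.M)
    return text
```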
\n","permalink":"http://occamist.dev/posts/pauron-new-automation-bot-for-aur/","summary":"\u003cp\u003eI have been quiet for a few months, the reason being I had an opportunity to develop new interesting things and I have been experimenting with some other technologies while trying to find the best use cases.\u003c/p\u003e\n\u003cp\u003eMeanwhile, I received a comment about a package that I have been maintaining for Arch Linux and I had no available time to respond or update my AUR package. If you remember from my previous post \u003ca href=\"https://occamist.dev/posts/packaging-go-for-arch-linux-tutorial\"\u003ePackaging Go for Arch Linux Tutorial\u003c/a\u003e, I like maintaining AUR packages, but it gets time consuming when you need to track new releases and update version, SHA and commit hashes by hand. I had to come up with my own niche solution.\u003c/p\u003e","title":"Pauron: New Automation Bot for AUR"},{"content":"So I have been dying to write this blog post for a while, and I feel great that I finally had some time to focus on this topic. My primary reason for writing this post is that I see great negligence from very senior developers when they hear \u0026ldquo;SQL\u0026rdquo;. I came across a few very senior interviewers who said \u0026ldquo;SQL is not even programming language, we use ORMs, why bother learning something that is not even a programming language (they mean SQL)\u0026rdquo;. So the main purpose of this post is to shed some light on this ignorance/incompetence by proving that \u0026ldquo;SQL indeed is a programming language\u0026rdquo; and showing why you must learn and practice it just like other programming languages.\nFirst of all, we need to talk about what makes a language a programming language. It is actually very simple: \u0026ldquo;Turing completeness\u0026rdquo; is what makes a language a programming language. But what is \u0026ldquo;Turing completeness\u0026rdquo;? I am glad you asked. The short answer is that you need to understand \u0026ldquo;Turing machines\u0026rdquo; before we can talk about \u0026ldquo;Turing completeness\u0026rdquo;. So what is a \u0026ldquo;Turing machine\u0026rdquo;?\nA Turing machine is a theoretical device that was invented by Alan Turing (check your £50 bank note, he is right there!) in order to understand computation. A Turing machine has:\nA long tape divided into cells (think of an infinitely long strip of paper) A head that can read symbols from the tape, write new symbols, then move left/right or stand still The machine state (think of the possible \u0026ldquo;modes\u0026rdquo; the machine can be in) The transition rules (a set of rules that dictate the flow from the current state to new states) Now, back to \u0026ldquo;Turing completeness\u0026rdquo;: a system is Turing complete if it can simulate any Turing machine. This means it can solve any problem that a Turing machine can solve.\nSo if I show how to solve a niche problem with a Turing machine simulation in SQL, I will prove that \u0026ldquo;SQL is indeed a programming language\u0026rdquo;. Notice that we say \u0026ldquo;simulation\u0026rdquo;: a real Turing machine has infinite memory and never runs out of steps; it is a theoretical device that can only be approximated in any real programming language.\nTuring Machine Simulation in SQL First of all, we are going to need our machine. And the machine needs to have an initial state, an accept state, a reject state, a blank symbol and a max number of steps before we force a halt. The accept state means a positive outcome that leads to a halt. The reject state means a negative outcome that leads to a halt. The word symbol means what symbol is located at each cell of the tape.\nCREATE TABLE machine ( initial_state VARCHAR(64) NOT NULL, accept_state VARCHAR(64) NOT NULL, reject_state VARCHAR(64) NOT NULL, blank_symbol VARCHAR(1) NOT NULL DEFAULT \u0026#39;_\u0026#39;, max_steps INTEGER NOT NULL DEFAULT 1000 ); Secondly, we need the transition rules, which form the state diagram of any sort of Turing machine. 
Before we switch from state to new_state, we must read from the tape (read_symbol) and write to the tape (write_symbol); then, finally, we can act on the move direction, whether it is left/right or none.\nCREATE TABLE transition_rules ( state VARCHAR(64) NOT NULL, new_state VARCHAR(64) NOT NULL, read_symbol VARCHAR(1) NOT NULL, write_symbol VARCHAR(1) NOT NULL, move_direction VARCHAR(1) NOT NULL CHECK (move_direction IN (\u0026#39;L\u0026#39;, \u0026#39;R\u0026#39;, \u0026#39;N\u0026#39;)) ); Before we go any further, I will add procedures for the machine and the transition rules so that an encapsulating procedure (a program), which will contain the transition rules for one machine, can use them easily. The machine is always a singleton which needs to be cleared every time it is set; transition rules only need to be cleared inside the encapsulating procedure (program).\n-- initialize_machine initializes the Turing machine for a specific program CREATE OR REPLACE PROCEDURE initialize_machine( initial_state VARCHAR(64), accept_state VARCHAR(64), reject_state VARCHAR(64), blank_symbol VARCHAR(1) DEFAULT \u0026#39;_\u0026#39;, max_steps INTEGER DEFAULT 1000 ) AS $$ BEGIN DELETE FROM machine; -- clear INSERT INTO machine VALUES (initial_state, accept_state, reject_state, blank_symbol, max_steps); END; $$ LANGUAGE plpgsql; -- add_transition_rule adds transition rule for a specific program CREATE OR REPLACE PROCEDURE add_transition_rule( state VARCHAR(64), new_state VARCHAR(64), read_symbol VARCHAR(1), write_symbol VARCHAR(1), move_direction VARCHAR(1) ) AS $$ BEGIN INSERT INTO transition_rules VALUES (state, new_state, read_symbol, write_symbol, move_direction); END; $$ LANGUAGE plpgsql; In Turing machines, the standard convention is that the head reads the current symbol, then writes a new symbol, and finally moves. This is known as the \u0026ldquo;read-write-move\u0026rdquo; sequence for each step of the computation.\nNow we will define a function for running steps (iterations) that follows the \u0026ldquo;read-write-move\u0026rdquo; algorithm. To help you understand the arguments and return parameters, I\u0026rsquo;ll explain the logic flow.\nTake the current state, accept state and reject state If the current state is the accept state or the reject state, we return the current state and halt Take the tape (a long text value) and the position on the tape We use pos to determine if we are inside the tape, and we read the symbol from the tape. If outside the tape, we read the symbol as blank, so we need to know what the blank symbol is. 
Remember that string indices start from 1 in SQL Query the existing transition rule that matches our current state and the symbol read from the tape If no transition rule is found, return halted as true Write the new symbol, which is indicated in the transition rule, to the tape Increment or decrement the position according to the move direction (which is also indicated in the transition rule) Finally, return the new state, the modified tape, the new position and the halted status The reason why we use a function over a procedure is an obvious one: we need to return the halted status and the other return parameters, since they will be used in a loop to determine the halting point.\n-- run_step executes a single step of machine CREATE OR REPLACE FUNCTION run_step( current_state VARCHAR(64), accept_state VARCHAR(64), reject_state VARCHAR(64), tape TEXT, pos INTEGER, blank VARCHAR(1) ) RETURNS TABLE ( new_state VARCHAR(64), new_tape TEXT, new_pos INTEGER, halted BOOLEAN ) AS $$ DECLARE tape_length INTEGER; symbol VARCHAR(1); rule RECORD; BEGIN -- check if it is a final state IF current_state = accept_state OR current_state = reject_state THEN RETURN QUERY SELECT current_state, tape, pos, TRUE; RETURN; END IF; tape_length := length(tape); -- get the current symbol IF pos \u0026lt; 1 OR pos \u0026gt; tape_length THEN symbol := blank; ELSE symbol := substr(tape, pos, 1); END IF; -- query transition rule SELECT * INTO rule FROM transition_rules tr WHERE tr.state = current_state AND tr.read_symbol = symbol LIMIT 1; IF rule IS NULL THEN -- no rule found, halt RETURN QUERY SELECT current_state, tape, pos, TRUE; RETURN; END IF; IF pos \u0026lt; 1 THEN -- extend tape left tape := rule.write_symbol || tape; pos := 1; ELSIF pos \u0026gt; tape_length THEN -- extend tape right tape := tape || rule.write_symbol; ELSE tape := substr(tape, 1, pos-1) || rule.write_symbol || substr(tape, pos+1); END IF; IF rule.move_direction = \u0026#39;L\u0026#39; THEN pos := pos - 1; ELSIF rule.move_direction = \u0026#39;R\u0026#39; THEN pos := pos + 1; END IF; RETURN QUERY SELECT rule.new_state, tape, pos, FALSE; END; $$ LANGUAGE plpgsql; And now we are going to define our event loop. It will be a procedure since it doesn\u0026rsquo;t need to return anything; however, I will need to debug and show outputs to my fellow readers at some point, so I will define a new machine steps table to record every step of my machine for debugging purposes.\nCREATE TABLE machine_steps ( step INTEGER NOT NULL, state VARCHAR(64) NOT NULL, tape TEXT NOT NULL, position INTEGER NOT NULL, halted BOOLEAN NOT NULL DEFAULT FALSE ); Our algorithm for running the machine is relatively simple.\nTake the tape, which is a text value, as an argument Assign the initial state, accept state, reject state, blank symbol and max steps from the machine Record the first machine step as 0 While not halted and step is less than max steps, run the loop In the loop, increment the step, execute one machine step and assign the new state, new tape, new position and halted status In the loop, record the machine step byproducts After the loop ends, if step has reached max steps without halting, mark the last machine step with a timeout state and set its halted status Fun fact: max steps is defined in our program to sidestep the most famous theoretical problem in computer science (aka the Halting problem). The Halting Problem asks whether there exists a general algorithm that can determine, for any arbitrary program and input, whether that program will eventually halt or run forever. 
Turing proved that no such algorithm can exist - it\u0026rsquo;s mathematically impossible to create a procedure that can always correctly predict whether an arbitrary program will halt.\n-- run_machine runs machine CREATE OR REPLACE PROCEDURE run_machine(t TEXT) AS $$ DECLARE tape TEXT := COALESCE(t, \u0026#39;\u0026#39;); state VARCHAR(64); position INTEGER := 1; accept_state VARCHAR(64); reject_state VARCHAR(64); blank_symbol VARCHAR(1); max_steps INTEGER; halted BOOLEAN := FALSE; step INTEGER := 0; BEGIN DELETE FROM machine_steps; -- clear -- read machine state SELECT m.initial_state, m.accept_state, m.reject_state, m.blank_symbol, m.max_steps INTO state, accept_state, reject_state, blank_symbol, max_steps FROM machine m; -- record machine step INSERT INTO machine_steps (step, state, tape, position, halted) VALUES (step, state, tape, position, halted); WHILE NOT halted AND step \u0026lt; max_steps LOOP step := step + 1; -- execute one machine step and directly assign results to main variables SELECT fn.new_state, fn.new_tape, fn.new_pos, fn.halted INTO state, tape, position, halted FROM run_step(state, accept_state, reject_state, tape, position, blank_symbol) fn; -- record one machine step INSERT INTO machine_steps (step, state, tape, position, halted) VALUES (step, state, tape, position, halted); END LOOP; -- check if we timed out IF step = max_steps AND NOT halted THEN UPDATE machine_steps SET state = \u0026#39;TIMEOUT\u0026#39;, halted = TRUE WHERE step = max_steps; END IF; END; $$ LANGUAGE plpgsql; With this, our Turing machine is complete!!! Now how do we actually run it? Well, you sort of need a state diagram that solves a problem! In the end, if you don\u0026rsquo;t have a problem, why do you need a machine in the first place?
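Before tackling a real problem, a tiny smoke test helps confirm the plumbing works. This toy program is my own illustration (it is not in the post): it scans right over 1s and accepts at the first blank.

```sql
-- Toy program: accept any (possibly empty) string of 1s.
CREATE OR REPLACE PROCEDURE run_scan_right_program(t TEXT) AS $$
BEGIN
    DELETE FROM transition_rules; -- clear rules from any previous program
    CALL initialize_machine('q0', 'yes', 'no');
    CALL add_transition_rule('q0', 'q0', '1', '1', 'R');  -- keep scanning right
    CALL add_transition_rule('q0', 'yes', '_', '_', 'N'); -- blank means done
    CALL run_machine(t);
END;
$$ LANGUAGE plpgsql;

CALL run_scan_right_program('111');
SELECT * FROM machine_steps ORDER BY step;
```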
I have come up with a palindrome recognizer state diagram below; let me explain briefly. It looks a bit complicated at first, but when you do the napkin calculation on a given basic input, it makes so much sense.\nstateDiagram-v2 [*] --\u0026gt; q0 q0 --\u0026gt; q1: 0 / _, R q0 --\u0026gt; q2: 1 / _, R q0 --\u0026gt; yes: _ / _, N q1 --\u0026gt; q1: 0 / 0, R q1 --\u0026gt; q1: 1 / 1, R q1 --\u0026gt; q3: _ / _, L q2 --\u0026gt; q2: 0 / 0, R q2 --\u0026gt; q2: 1 / 1, R q2 --\u0026gt; q4: _ / _, L q3 --\u0026gt; q5: 0 / _, L q3 --\u0026gt; no: 1 / 1, N q3 --\u0026gt; yes: _ / _, N q4 --\u0026gt; q5: 1 / _, L q4 --\u0026gt; no: 0 / 0, N q4 --\u0026gt; yes: _ / _, N q5 --\u0026gt; q5: 0 / 0, L q5 --\u0026gt; q5: 1 / 1, L q5 --\u0026gt; q0: _ / _, R yes --\u0026gt; [*]: Accepted no --\u0026gt; [*]: Rejected Since we have our transition rules (the state diagram above), we can write our procedure. Below is the palindrome program as a SQL procedure which uses the Turing machine to solve the palindrome problem.\n-- run_palindrome_program runs the palindrome program in Turing machine CREATE OR REPLACE PROCEDURE run_palindrome_program(t TEXT) AS $$ BEGIN DELETE FROM transition_rules; -- clear CALL initialize_machine(\u0026#39;q0\u0026#39;, \u0026#39;yes\u0026#39;, \u0026#39;no\u0026#39;); -- q0: read left most symbol and move right side CALL add_transition_rule(\u0026#39;q0\u0026#39;, \u0026#39;q1\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q0\u0026#39;, \u0026#39;q2\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q0\u0026#39;, \u0026#39;yes\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;N\u0026#39;); -- q1: 0 was at the beginning, now go to the right-most end CALL add_transition_rule(\u0026#39;q1\u0026#39;, \u0026#39;q1\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q1\u0026#39;, \u0026#39;q1\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q1\u0026#39;, \u0026#39;q3\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;L\u0026#39;); -- q2: 1 was at the beginning, now go to the right-most end CALL add_transition_rule(\u0026#39;q2\u0026#39;, \u0026#39;q2\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q2\u0026#39;, \u0026#39;q2\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;R\u0026#39;); CALL add_transition_rule(\u0026#39;q2\u0026#39;, \u0026#39;q4\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;L\u0026#39;); -- q3: check if last symbol matches for 0 at the beginning CALL add_transition_rule(\u0026#39;q3\u0026#39;, \u0026#39;q5\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;L\u0026#39;); CALL add_transition_rule(\u0026#39;q3\u0026#39;, \u0026#39;no\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;N\u0026#39;); CALL add_transition_rule(\u0026#39;q3\u0026#39;, \u0026#39;yes\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;N\u0026#39;); -- q4: check if last symbol matches for 1 at the beginning CALL add_transition_rule(\u0026#39;q4\u0026#39;, \u0026#39;q5\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;L\u0026#39;); CALL add_transition_rule(\u0026#39;q4\u0026#39;, \u0026#39;no\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;N\u0026#39;); CALL add_transition_rule(\u0026#39;q4\u0026#39;, \u0026#39;yes\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;N\u0026#39;); -- q5: now go back to the left-most end CALL add_transition_rule(\u0026#39;q5\u0026#39;, \u0026#39;q5\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;L\u0026#39;); CALL add_transition_rule(\u0026#39;q5\u0026#39;, \u0026#39;q5\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;L\u0026#39;); CALL add_transition_rule(\u0026#39;q5\u0026#39;, \u0026#39;q0\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;_\u0026#39;, \u0026#39;R\u0026#39;); CALL run_machine(t); END; $$ LANGUAGE plpgsql; Walking through the example with the string \u0026ldquo;101\u0026rdquo;, the machine ends at the yes state: it halts and accepts the string.\nturing_machine=# call run_palindrome_program(\u0026#39;101\u0026#39;); CALL 
turing_machine=# select * FROM machine_steps; step | state | tape | position | halted ------+-------+------+----------+-------- 0 | q0 | 101 | 1 | f 1 | q2 | _01 | 2 | f 2 | q2 | _01 | 3 | f 3 | q2 | _01 | 4 | f 4 | q4 | _01_ | 3 | f 5 | q5 | _0__ | 2 | f 6 | q5 | _0__ | 1 | f 7 | q0 | _0__ | 2 | f 8 | q1 | ____ | 3 | f 9 | q3 | ____ | 2 | f 10 | yes | ____ | 2 | f 11 | yes | ____ | 2 | t (12 rows) If you want to try it on your own, you can clone my repository. Let\u0026rsquo;s try 1001 for example.\ngit clone https://github.com/occamist/turing-machine-in-sql docker compose up -d --build docker exec -it postgres-turing psql -U turing -d turing_machine turing_machine=# select * FROM machine_steps; turing_machine=# call run_palindrome_program(\u0026#39;1001\u0026#39;); turing_machine=# select * FROM machine_steps; ------+-------+-------+----------+-------- 0 | q0 | 1001 | 1 | f 1 | q2 | _001 | 2 | f 2 | q2 | _001 | 3 | f 3 | q2 | _001 | 4 | f 4 | q2 | _001 | 5 | f 5 | q4 | _001_ | 4 | f 6 | q5 | _00__ | 3 | f 7 | q5 | _00__ | 2 | f 8 | q5 | _00__ | 1 | f 9 | q0 | _00__ | 2 | f 10 | q1 | __0__ | 3 | f 11 | q1 | __0__ | 4 | f 12 | q3 | __0__ | 3 | f 13 | q5 | _____ | 2 | f 14 | q0 | _____ | 3 | f 15 | yes | _____ | 3 | f 16 | yes | _____ | 3 | t You may think that it is not efficient at all, but I want to remind you that this was fundamental to the whole computing paradigm. We are not focusing so much on efficiency here, since everything is tied to a single tight while loop like in videogames. What matters here is whether you can replicate the algorithm and write it in the claimed language. It means that the claimed language is indeed a programming language.\nFinal Words I want to stress a final point. We have utilized \u0026ldquo;pl/pgsql\u0026rdquo;, which is the procedural language of PostgreSQL. In the ANSI-SQL era (SQL-86/SQL-89), SQL was not Turing complete because it lacked recursive structures such as loops. SQL-99 added \u0026ldquo;WITH RECURSIVE\u0026rdquo; for loop-like recursion and procedural elements (CASE/WHEN etc). In today\u0026rsquo;s world, every production database is at minimum SQL-99 compliant, which makes them valid programming languages. For example, even sqlite\u0026rsquo;s SQL dialect is Turing complete.\nLastly, even if we lived before 1999 and no SQL-99 existed, the entire world\u0026rsquo;s data would still be running on SQL, as it has since 1986. Why would people take pride in learning ORM abstractions rather than learning the fundamentals of existing databases? ORM abstractions will surely change more often and will not provide the full feature set of what you can achieve. It is just wishful thinking to ignore SQL and treat it as a chore rather than a powerful tool.\n","permalink":"http://occamist.dev/posts/sql-turing-completeness/","summary":"\u003cp\u003eSo I have been dying to write this blog post for a while, and I feel great that I finally had some time to focus on this topic. 
My primary reason for writing this post is that I see great negligence from very senior developers when they hear \u0026ldquo;SQL\u0026rdquo;. I came across a few very senior interviewers who said \u0026ldquo;SQL is not even programming language, we use ORMs, why bother learning something that is not even a programming language (they mean SQL)\u0026rdquo;. So the main purpose of this post is to shed some light on this ignorance/incompetence by proving that \u0026ldquo;SQL indeed is a programming language\u0026rdquo; and showing why you must learn and practice it just like other programming languages.\u003c/p\u003e","title":"SQL: Turing Completeness"},{"content":"In this post, we will go over 8 little-known linters that most people don\u0026rsquo;t use. These linters look unimportant at first glance; however, they end up winning hearts with their humbleness. If this sounds interesting, let\u0026rsquo;s start.\n8 Godox # .golangci.yaml linters-settings: godox: # Report any comments starting with keywords keywords: - TODO - BUG - FIXME - OPTIMIZE - HACK // TODO: what the hell is this // hacky logic here ❯ golangci-lint run ./... main.go:18:2: main.go:18: Line contains TODO/BUG/FIXME/OPTIMIZE/HACK: \u0026#34;TODO: what the hell is this\u0026#34; (godox) // TODO: what the hell is this Godox checks comments and drops a linter error if an unwanted comment is written with specific keywords. I encountered this linter after an incident that caused a prod deployment blockage due to me deleting a TODO and a hacky code section that were passing all tests except the prod environment tests. Long story short, there was no way for me to avoid the TODO section logic because my task was interfering with the already-written hacky solution.\nThe whole aim of this linter is to avoid this hackiness in the first place. Need a TODO? Great, you can open a Jira ticket and communicate with others instead of going the hacky route and throwing a TODO at the next victim 😄 I am a very big fan of this linter after that nasty experience.\n7 Gci # .golangci.yaml linters-settings: gci: custom-order: true sections: - standard # Standard section: captures all standard packages. - default # Default section: contains all imports that could not be matched to another section type. - prefix(github.com/myorg/myproject) # Custom section: groups all imports with the specified Prefix. import ( \u0026#34;context\u0026#34; \u0026#34;flag\u0026#34; \u0026#34;github.com/myorg/myproject/abc\u0026#34; \u0026#34;github.com/sourcegraph/conc\u0026#34; \u0026#34;log\u0026#34; \u0026#34;os\u0026#34; \u0026#34;runtime\u0026#34; ) ❯ golangci-lint run ./... main.go:6:1: File is not properly formatted (gci) \u0026#34;github.com/myorg/myproject/abc\u0026#34; ^ main.go:10:1: File is not properly formatted (gci) \u0026#34;runtime\u0026#34; ^ Gci is very similar to goimports but manages import blocks in a custom way that separates standard, 3rd-party and local project imports. Basically, a stricter goimports that is visually pleasant to look at.\nI initially hadn\u0026rsquo;t considered something like this, given that goimports is good enough, but I saw that the insertion point of an import line differs between editor X and editor Y due to automatic completions. On top of that, goimports passes as long as things are sorted within the same block. I didn\u0026rsquo;t like the idea of having standard, 3rd-party and local project imports in one block, since it turned into a soup of imports.
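For contrast, this is roughly what gci produces for the import block above: three groups separated by blank lines, standard library first, then third-party modules, then the myorg/myproject prefix (my own illustration of the output):

```go
import (
	"context"
	"flag"
	"log"
	"os"
	"runtime"

	"github.com/sourcegraph/conc"

	"github.com/myorg/myproject/abc"
)
```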
I didn't like the idea of having standard, 3rd-party and local project imports in one block, since it turned into a soup of imports.
6 Revive's exported # .golangci.yaml linters-settings: revive: max-open-files: 2048 # Maximum number of open files at the same time. ignore-generated-header: false # When set to true, ignores files with "GENERATED" header, similar to golint. severity: warning # Sets the default severity. enable-all-rules: false # Enable all available rules. confidence: 0.8 # This means that linting errors with less than 0.8 confidence will be ignored. rules: - name: exported severity: warning disabled: false arguments: - "checkPrivateReceivers" - "sayRepetitiveInsteadOfStutters" func DoThat() { // some logic that is used by a 3rd party } ❯ golangci-lint run ./...
main.go:18:1: exported: exported function DoThat should have comment or be unexported (revive) func DoThat() { ^
As you may know, revive is a meta-linter, which means it contains a lot of linter rules. You may not like that, since some of its features come from the deprecated golint. Most of today's IDEs support what golint does, and real kings such as staticcheck and govet do most of the work revive does. It is not recommended to run multiple meta-linters, since they will conflict.
However, there is one unique linter rule in revive called exported, which makes you write a comment for whatever is exported. This is quite useful, as people forget to add comments for exported identifiers. If it is a toy project, you probably don't need this. But if you are designing a library for someone else, you had better have comments on your public surface.
5 Tparallel func TestScenarioOne(t *testing.T) { tests := []struct { name string }{ { name: "handles basic case", }, { name: "handles edge case", }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { t.Parallel() call(tt.name) }) } } func TestScenarioTwo(t *testing.T) { t.Parallel() tests := []struct { name string }{ { name: "processes valid input", }, { name: "handles invalid input", }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { call(tt.name) }) } } ❯ golangci-lint run ./...
abc_test.go:140:6: TestScenarioTwo's subtests should call t.Parallel (tparallel) func TestScenarioTwo(t *testing.T) { ^
abc_test.go:120:6: TestScenarioOne should call t.Parallel on the top level as well as its subtests (tparallel) func TestScenarioOne(t *testing.T) { ^
Tparallel is pretty self-descriptive: if you call t.Parallel() at the top and have a table of subtests that do not call t.Parallel(), it warns you to call it in the subtests too. Conversely, you may have forgotten to call t.Parallel() at the top while calling it only in the table tests. Basically, it ensures correct usage of t.Parallel().
4 Usestdlibvars ❯ golangci-lint run ./...
abc.go:18:30: "200" can be replaced by http.StatusOK
abc.go:203:46: "POST" can be replaced by http.MethodPost (usestdlibvars) req, err := http.NewRequestWithContext(ctx, "POST", URL, bytes.NewBufferString(formData))
Ever used something such as 200 or "POST" and forgot that these exist as constants in the standard library? Meet usestdlibvars, a nice bit of eye candy that encourages more standard-library usage.
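For completeness, here is what that flagged code looks like once it leans on the standard library (a small sketch I wrote to mirror the diagnostics above; the function names and package are illustrative):
package abc

import (
	"bytes"
	"context"
	"net/http"
)

// newFormRequest uses http.MethodPost instead of the magic string "POST".
func newFormRequest(ctx context.Context, url, formData string) (*http.Request, error) {
	return http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewBufferString(formData))
}

// respondOK uses http.StatusOK instead of the magic number 200.
func respondOK(w http.ResponseWriter) {
	w.WriteHeader(http.StatusOK)
}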
3 Usetesting # .golangci.yaml linters-settings: usetesting: os-create-temp: true # Enable/disable `os.CreateTemp("", ...)` detections. os-mkdir-temp: true # Enable/disable `os.MkdirTemp()` detections. os-setenv: true # Enable/disable `os.Setenv()` detections. os-temp-dir: true # Enable/disable `os.TempDir()` detections. os-chdir: true # Enable/disable `os.Chdir()` detections. Disabled if Go < 1.24. context-background: true # Enable/disable `context.Background()` detections. Disabled if Go < 1.24. context-todo: true # Enable/disable `context.TODO()` detections. Disabled if Go < 1.24. ❯ golangci-lint run ./...
abc_test.go:104:6: os.Setenv() could be replaced by t.Setenv() in TestStoreCreateABC (usetesting) _ = os.Setenv("ABC", "GET") ^
This linter, usetesting, is quite new and supersedes the good old tenv. Its aim is to replace os and context operations with their t *testing.T equivalents in your tests. I am looking forward to Go 1.24, when every ctx := context.Background() will be replaceable by t.Context(); you can find more details about the new t.Context() here
2 Nilnil type something struct{} func searchSomething() (*something, error) { return nil, nil } ❯ golangci-lint run ./...
abc.go:79:2: return both a `nil` error and an invalid value: use a sentinel error instead (nilnil) return nil, nil ^
The linter nilnil is here to avoid ambiguous nil, nil returns. Returning both could be a deliberate developer choice, and the famous Gorm has exactly this form of ambiguity in its public methods, but it is often a code smell.
Technically, consistency matters before introducing sentinel errors; however, I find myself more of a sentinel-error person than a nil, nil person. I think having a linter that checks this breach of contract is quite nice and elegant. I haven't seen any false positives from this linter, ever.
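What the linter pushes you toward instead is a sentinel error. A minimal sketch of that pattern (ErrSomethingNotFound is my own illustrative name):
package abc

import "errors"

// ErrSomethingNotFound is a sentinel error: callers receive an explicit
// signal instead of an ambiguous nil, nil pair.
var ErrSomethingNotFound = errors.New("something not found")

type something struct{}

func searchSomething() (*something, error) {
	// ... lookup logic elided ...
	return nil, ErrSomethingNotFound
}
Callers can then branch on errors.Is(err, ErrSomethingNotFound) instead of guessing what a nil pointer means.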
1 Wrapcheck func (q *Queries) ListHighscores(ctx context.Context) ([]Highscore, error) { rows, err := q.db.QueryContext(ctx, listHighscores) if err != nil { return nil, err } defer rows.Close() var items []Highscore for rows.Next() { var i Highscore if err := rows.Scan(&i.ID, &i.Username, &i.Score); err != nil { return nil, err } items = append(items, i) } if err := rows.Close(); err != nil { return nil, err } if err := rows.Err(); err != nil { return nil, err } return items, nil } ❯ go install github.com/tomarrell/wrapcheck/v2/cmd/wrapcheck@v2
❯ wrapcheck ./...
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:24:12: error returned from external package is unwrapped: sig: func (*database/sql.Row).Scan(dest ...any) error
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:34:9: error returned from interface method should be wrapped: sig: func (github.com/occamist/highscore-api/repository.DBTX).ExecContext(context.Context, string, ...interface{}) (database/sql.Result, error)
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:46:12: error returned from external package is unwrapped: sig: func (*database/sql.Row).Scan(dest ...any) error
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:57:15: error returned from interface method should be wrapped: sig: func (github.com/occamist/highscore-api/repository.DBTX).QueryContext(context.Context, string, ...interface{}) (*database/sql.Rows, error)
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:64:16: error returned from external package is unwrapped: sig: func (*database/sql.Rows).Scan(dest ...any) error
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:69:15: error returned from external package is unwrapped: sig: func (*database/sql.Rows).Close() error
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:72:15: error returned from external package is unwrapped: sig: func (*database/sql.Rows).Err() error
/home/occamist/Desktop/Hobby/highscore-api/repository/queries.sql.go:92:12: error returned from external package is unwrapped: sig: func (*database/sql.Row).Scan(dest ...any) error
Wrapcheck, as the name suggests, enforces wrapping errors with useful information. It doesn't check %v vs %w; it only checks that you don't do a bare if err != nil { return err }. I actually quite like this linter, because the Google style guide tells us to decorate the error with what's being called, such as fmt.Errorf("something.Do(): %v", err)
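For contrast, here is the shape wrapcheck nudges those queries toward, with each returned error decorated by the call that produced it (a sketch I based on the generated code above, not the actual fix):
package repository

import (
	"context"
	"database/sql"
	"fmt"
)

// getScore mirrors one of the generated queries, but wraps every returned
// error so the caller knows exactly which call failed.
func getScore(ctx context.Context, db *sql.DB, username string) (int64, error) {
	var score int64
	row := db.QueryRowContext(ctx, "SELECT score FROM highscores WHERE username = $1", username)
	if err := row.Scan(&score); err != nil {
		return 0, fmt.Errorf("row.Scan(): %w", err)
	}
	return score, nil
}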
One fun fact: sqlc-generated code suffers from this dizziness a lot 😄 Next time you are thinking about code generation, I suggest you think at least 10 more times.
The Ending Thanks for reading; if you made it this far, I hope you learnt something new or productive. I have been using a huge bundle of linters for the last 5 years. If you are interested in a golangci-lint config, check out my gist here; it is based on my opinions, so tweak it according to your project.
","permalink":"http://occamist.dev/posts/top-eight-underdog-linters-for-go/","summary":"<p>In this post, we will go over 8 little-known linters that most people do not use. These linters look unimportant at first glance, but they end up winning hearts with their humbleness. If this sounds interesting, let's start.</p>
<h2 id="8-godox">8 Godox</h2>
<p><a href="https://github.com/matoous/godox">Godox</a> checks comments and reports a lint error whenever a comment starts with one of the configured keywords. I encountered this linter after an incident that blocked a prod deployment because I deleted a TODO and the hacky code section behind it, which had passed all tests except the prod environment tests. Long story short, there was no way for me to avoid the TODO section's logic, because my task was interfering with the already-written hacky solution.</p>","title":"Top 8 Underdog Linters for Go"},{"content":"In this post, I announce my newest tool for language learners. As long as I am learning a new language, I plan to maintain it.
What is Laverna? What does it solve? Laverna is a sleek command-line tool that transforms text into spoken audio. All it needs is a config file to read. Whether you're practicing Thai greetings, perfecting your Japanese pronunciation, or working on English phrases, Laverna can boost your productivity whenever you want to store audio clips for the phrases you are learning.
Getting Started If you're a Go user, simply run go install github.com/occamist/laverna@latest Not a Go developer? No problem! You can download ready-to-use binaries directly from the releases.
After you have installed it, simply create a YAML file such as example.yaml
- speed: normal voice: th text: "สวัสดีครับ" - speed: slower voice: en text: "Hello there" - speed: slowest voice: ja text: "こんにちは~" then pass it on the command line as below.
laverna -file example.yaml If you fancy the flags, here they are:
Usage of laverna: -file string filename path that is used for reading the YAML file -workers int maximum number of concurrent downloads (default 20) Want to give Laverna a try this holiday season? Head over to the Laverna GitHub repository and start creating your personalized language-learning audio. Happy Holidays and happy learning! 🎄🎧✨
Found a bug? Spotted an issue? Have a brilliant idea to share? Don't hesitate to give me a nudge!
","permalink":"http://occamist.dev/posts/laverna-a-christmas-gift-for-language-learners/","summary":"<p>In this post, I announce my newest tool for language learners. As long as I am learning a new language, I plan to maintain it.</p>
<h2 id="what-is-laverna-what-does-it-solve">What is Laverna? What does it solve?</h2>
<p>Laverna is a sleek command-line tool that transforms text into spoken audio. All it needs is a config file to read. Whether you're practicing Thai greetings, perfecting your Japanese pronunciation, or working on English phrases, Laverna can boost your productivity whenever you want to store audio clips for the phrases you are learning.</p>","title":"A Christmas Gift for Language Learners"},{"content":"The Problem Hello, in this short and sweet post I will show a handy bash script that catches slow Go tests based on their durations and their Go cache status, but first we need to define the problem.
Our current problem: when I run go test -v ./..., we get a simply huge, overwhelming output full of subtest logs as well as t.Parallel pauses/continues.
Plus, my inner OCD strikes when some things are always cached and some things never are; I could never guess why, and I never have time to read a plain log with a million lines.
So I came up with a solution that produces nicer output and displays what is cached and what is not. You could run this in your CI/CD pipelines and so on, to keep track of test duration increases/decreases or test cache statuses.
The Solution One important note: my solution only works for passed tests; if your tests are skipping or failing, my solution will ignore them.
As an example, I used the test file below, replicated across some guinea-pig packages for demonstration purposes.
package pkg1 // package clause added; the file is replicated into each demo package
import ( "testing" "time" ) func TestQuickSleep(t *testing.T) { time.Sleep(100 * time.Millisecond) } func TestMediumSleep(t *testing.T) { time.Sleep(500 * time.Millisecond) } func TestLongSleep(t *testing.T) { t.Run("Wait1.5Seconds", func(t *testing.T) { time.Sleep(1500 * time.Millisecond) }) t.Run("Wait2.5Seconds", func(t *testing.T) { time.Sleep(2500 * time.Millisecond) }) } func TestParallelSleep(t *testing.T) { t.Parallel() time.Sleep(750 * time.Millisecond) } This roughly produced the verbose output below, which is what -v gives you. Sure, we could drop -v, but then I would have no idea which specific test case is causing issues, increasing its duration or skipping its cache.
❯ go test -v ./...
? github.com/lingua-sensei/lingua-sensei [no test files]
=== RUN TestQuickSleep
--- PASS: TestQuickSleep (0.10s)
=== RUN TestMediumSleep
--- PASS: TestMediumSleep (0.50s)
=== RUN TestLongSleep
=== RUN TestLongSleep/Wait1.5Seconds
=== RUN TestLongSleep/Wait2.5Seconds
--- PASS: TestLongSleep (4.00s)
--- PASS: TestLongSleep/Wait1.5Seconds (1.50s)
--- PASS: TestLongSleep/Wait2.5Seconds (2.50s)
=== RUN TestParallelSleep
=== PAUSE TestParallelSleep
=== RUN TestParallel2Sleep
=== PAUSE TestParallel2Sleep
=== CONT TestParallelSleep
=== CONT TestParallel2Sleep
--- PASS: TestParallel2Sleep (0.75s)
--- PASS: TestParallelSleep (0.75s)
PASS
ok github.com/lingua-sensei/lingua-sensei/pkg1 (cached)
=== RUN TestParallel2Sleep
=== PAUSE TestParallel2Sleep
=== CONT TestParallel2Sleep
--- PASS: TestParallel2Sleep (0.50s)
PASS
ok github.com/lingua-sensei/lingua-sensei/pkg2 (cached)
=== RUN TestParallel2Sleep
=== PAUSE TestParallel2Sleep
=== RUN TestParallel3Sleep
=== PAUSE TestParallel3Sleep
=== CONT TestParallel2Sleep
=== CONT TestParallel3Sleep
--- PASS: TestParallel3Sleep (0.50s)
--- PASS: TestParallel2Sleep (0.50s)
PASS
ok github.com/lingua-sensei/lingua-sensei/pkg2/pkg3 0.504s
I have come up with the handy bash script below
#!/bin/bash TEST_ARGS="${@:-./...}" go test -v -json $TEST_ARGS \ | jq -r ' if .Action == "pass" and .Test != null then [.Package, .Test, (.Elapsed | tostring), ""] | join(",") elif .Action == "output" and (.Output | contains("ok")) then [.Package, "", "", .Output] | join(",") else empty end' \ | awk -F, ' BEGIN { line = "════════════════════════════════════════════════════════════════════════════════" printf "\n%-60s %-8s %s\n", "Test Name", "Duration", "Status" printf "%s\n\n", line } # Store results by package $2 !=
"" { pkg_tests[$1][$2] = $3 } # Store package cache status $4 ~ /ok.*/ { cached = ($4 ~ /cached/) ? "(cached)" : "" pkgs[$1] = cached } END { for (pkg in pkg_tests) { printf "%s:\n", pkg # Get all tests for each package and sort n = asorti(pkg_tests[pkg], sorted) for (i = 1; i <= n; i++) { test = sorted[i] printf " %-58s %6.2fs %s\n", test, pkg_tests[pkg][test], pkgs[pkg] } printf "\n" } printf "%s\n", line }' What happens is: we use jq to filter the JSON form of our Go test events, then we use awk to make it pretty, and it ends up as the result below
Test Name Duration Status
════════════════════════════════════════════════════════════════════════════════
github.com/lingua-sensei/lingua-sensei/pkg2/pkg3:
TestParallel2Sleep 0.50s
TestParallel3Sleep 0.50s
github.com/lingua-sensei/lingua-sensei/pkg1:
TestLongSleep 4.00s (cached)
TestLongSleep/Wait1.5Seconds 1.50s (cached)
TestLongSleep/Wait2.5Seconds 2.50s (cached)
TestMediumSleep 0.50s (cached)
TestParallel2Sleep 0.75s (cached)
TestParallelSleep 0.75s (cached)
TestQuickSleep 0.10s (cached)
github.com/lingua-sensei/lingua-sensei/pkg2:
TestParallel2Sleep 0.50s (cached)
════════════════════════════════════════════════════════════════════════════════
The Ending Thanks for reading; I swear I don't like bazel or shady shell scripts lying around my repos.
References Go: Find Slow Tests by Leigh ","permalink":"http://occamist.dev/posts/catching-slow-go-tests/","summary":"<h2 id="the-problem">The Problem</h2>
<p>Hello, in this short and sweet post I will show a handy bash script that catches slow Go tests based on their durations and their Go cache status, but first we need to define the problem.</p>
<p>Our current problem: when I run <code>go test -v ./...</code>, we get a simply huge, overwhelming output full of subtest logs as well as t.Parallel pauses/continues. Plus, my inner OCD strikes when some things are always cached and some things never are; I could never guess why, and I never have time to read a plain log with a million lines.</p>","title":"Catching Slow Go Tests"},{"content":"Getting Started In this tutorial, I will show how to package a Go application for the Arch Linux User Repository (AUR). We will open an AUR account, go through the PKGBUILD template, and follow the Arch Wiki guidelines for Go. By the end of the tutorial, you will be able to upload your own Arch package that uses Go to the AUR.
The Requirements Git Go Arch Linux x86_64 AUR account Setting up AUR account and SSH key We will fill in the username and the email in this form, as well as the most important field, the public ssh key.
The rest are optional.
Generate and fill in the SSH public key.
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/aur_rsa -C "your_email@example.com"
$ cat ~/.ssh/aur_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDiniLTrxNbDH/R66BYUHieRT9sTqkn2678picCjF8MoTxsZume105hsDFSfg79pzfedY3iJQXMsCzk11pcnUNsGHxT/wh9s8aFrlI+n9JVMpEe7VOZqTLYyNBXtpAJUaY2Ptp4/l2p81dhpeCGTMhYNu2eDxaCaI5QvDOkyEmAZYAmuLT19OwJv8YW/1+1tg+Piaxyg/b+Dic7EeQQT10AI9drfRQG5pazREYkLGjClJP6pw/OnNcScMWR/Sd4phiz84DKnBLWXIIdbK+CDKDyFPt1FMIXkY1YSY+RAyXgJ3m1z6byCRs5BrN4RZArPcIEmVRRffkhq7tVBK0mwygTl8Hku60MqvdENLrylPcH3Ua2iqYLhftuMNfsZffUb9d5MI0BCaoQuzMfEMG0ZZZuDoZ38HZDjZbFFG0Fg+rt6IRTdRogZ0bzWacM0ig8J+HDnJnNIhXut5RC/f4W1RIXITujNp0blQRISrh9lGXcH/qz002ovcAoAd2yRkRdhh3NlP9mAZ/Rns47FwKP94ooG2/Zb2JRNJLJgdgaEWT+u1v5G4tPAoySxwZ0HBZSxSEmZC34piiPxdaHd6NAy3drt3Nt7QWVdBU9pD17lj8PsBuzXVReBkM+/0MFMLYDThunwVVhpZSHtmDTzWoGijIJnzaZrJMPcZZab/vF1WJ/yQ== talhaaltinel@hotmail.com
After you have copy-pasted your public ssh key into the SSH public key box, change your ~/.gitconfig
[url "ssh://aur@aur.archlinux.org/"] insteadOf = https://aur.archlinux.org/ insteadOf = http://aur.archlinux.org/ Finally, submit the form to confirm your account.
Understanding PKGBUILD If you inspect /usr/share/pacman/PKGBUILD.proto, you will see the fields you can fill.
# This is an example PKGBUILD file. Use this as a start to creating your own, # and remove these comments. For more information, see 'man PKGBUILD'. # NOTE: Please fill out the license field for your package! If it is unknown, # then please put 'unknown'. # Maintainer: Your Name <youremail@domain.com> pkgname=NAME pkgver=VERSION pkgrel=1 epoch= pkgdesc="" arch=() url="" license=('GPL') groups=() depends=() makedepends=() checkdepends=() optdepends=() provides=() conflicts=() replaces=() backup=() options=() install= changelog= source=("$pkgname-$pkgver.tar.gz" "$pkgname-$pkgver.patch") noextract=() md5sums=() validpgpkeys=() prepare() { cd "$pkgname-$pkgver" patch -p1 -i "$srcdir/$pkgname-$pkgver.patch" } build() { cd "$pkgname-$pkgver" ./configure --prefix=/usr make } check() { cd "$pkgname-$pkgver" make -k check } package() { cd "$pkgname-$pkgver" make DESTDIR="$pkgdir/" install } In summary, you don't have to fill in all these fields, but it is good to remember what they are.
pkgname and pkgver are the name of the software package and the version of the software you are providing. pkgrel and epoch are additional ways of sub-versioning pkgver; most of the time you won't use them. pkgdesc is the description of your software. arch is the architecture; most of the time it is just x86_64. url and license are the URL of the software repository and the actual license name; these are quite important.
groups provides a way to install multiple software packages together; think of it like gnome (a very big software group). depends lists runtime dependencies. makedepends lists compile-time dependencies. checkdepends lists check/test dependencies. optdepends lists optional runtime dependencies. provides declares this package as an alternative replacement for another package. conflicts is used so a conflicting package cannot be installed alongside it. replaces states what this built package can replace. backup lists files that should be backed up when upgrading the package. options can include things like compiler flags, compression settings, etc. install can be used to specify custom installation scripts or commands to run during the installation process. changelog specifies the location of a changelog or release-notes file. source lists the source files required to build the package; it includes the URLs or file paths to the source code or other necessary files. noextract lets you specify files that shouldn't be extracted during the build process. md5sums, b2sums, sha512sums and sha256sums are checksums; they can be skipped if not needed. validpgpkeys is similar to the checksums above. There are also the 4 most common standard PKGBUILD shell functions:
prepare() is used to make changes or apply patches to the source code before the build process begins. build() is where the actual compilation or building of the software package takes place. check() runs tests to ensure that the software behaves as expected; it verifies the correctness of the package before installation. package() puts the built files into a packaged format suitable for installation; it copies files into a temporary directory structure that mirrors the final installation directory. Create a .gitignore Since we will be building multiple times to confirm our package, I highly recommend a good .gitignore so you don't accidentally push your artifacts. Below is my .gitignore file.
*
!/.gitignore
!/.SRCINFO
!/PKGBUILD
Basic PKGBUILD for Go Off the top of my head, I have made a very simple PKGBUILD file for k3sup, which is an amazing tool for bootstrapping k3s clusters. Please check k3sup out and support it if you like it.
A friendly reminder for pkgname: xyzpackage means a build from a stable version of the source, xyzpackage-git means a build from the latest commit of the source, and xyzpackage-bin means fetching a prebuilt binary without a build phase.
# Maintainer: Talha Altinel <talhaaltinel@hotmail.com> pkgname=k3sup pkgver=0.13.0 pkgrel=1 pkgdesc='A tool to bootstrap K3s over SSH in < 60s' arch=('x86_64') url='https://github.com/alexellis/k3sup' license=('MIT') depends=('openssh') makedepends=('git' 'go>=1.20') source=("${pkgname}-${pkgver}.tar.gz::https://github.com/alexellis/k3sup/archive/${pkgver}.tar.gz") sha256sums=('24939844ac6de581eb05ef6425c89c32b2d0e22800f1344c19b2164eec846c92') _commit=('1d2e443ea56a355cc6bd0a14a8f8a2661a72f2e8') build() { cd "$pkgname-$pkgver" CGO_ENABLED=0 GOARCH=amd64 GOOS=linux go build \ -ldflags "-s -w -X github.com/alexellis/k3sup/cmd.Version=$pkgver -X github.com/alexellis/k3sup/cmd.GitCommit=$_commit" \ -o k3sup \ .
for shell in bash fish zsh; do ./k3sup completion "$shell" > "$shell-completion" done } package() { cd "$pkgname-$pkgver" install -vDm755 -t "$pkgdir/usr/bin" k3sup mkdir -p "${pkgdir}/usr/share/bash-completion/completions/" mkdir -p "${pkgdir}/usr/share/zsh/site-functions/" mkdir -p "${pkgdir}/usr/share/fish/vendor_completions.d/" install -vDm644 bash-completion "$pkgdir/usr/share/bash-completion/completions/k3sup" install -vDm644 fish-completion "$pkgdir/usr/share/fish/vendor_completions.d/k3sup.fish" install -vDm644 zsh-completion "$pkgdir/usr/share/zsh/site-functions/_k3sup" install -vDm644 -t "$pkgdir/usr/share/licenses/$pkgname" LICENSE } In my build phase, I compile the source to create the k3sup binary, and I also run the binary to generate shell completions. Kudos for this functionality go to the spf13/cobra Go library for CLIs.
In my package phase, I move the binary, the shell completions and the license to the correct places.
That all sounds cool and sweet, but we are missing a couple of things. The Arch Wiki has an extensive guide about this here, but long story short, I need a program called namcap: sudo pacman -S namcap, then run namcap on the PKGBUILD and on the produced .zst archive.
$ namcap ./PKGBUILD
$ makepkg -s && namcap k3sup-0.13.0-1-x86_64.pkg.tar.zst
k3sup W: ELF file ('usr/bin/k3sup') lacks FULL RELRO, check LDFLAGS.
k3sup W: ELF file ('usr/bin/k3sup') lacks PIE.
k3sup W: Dependency included, but may not be needed ('openssh')
The biggest surprise: the Arch guides actually only bless Go binaries built in one specific way, with CGO :( The above PKGBUILD was completely valid, but if you want your package in the official Arch repositories outside the AUR, you need to ensure FULL RELRO and PIE are satisfied. I won't explain these terms in depth; they are essentially binary hardening for extreme security.
Security Hardened PKGBUILD for Go # Maintainer: Talha Altinel <talhaaltinel@hotmail.com> pkgname=k3sup pkgver=0.13.0 pkgrel=1 pkgdesc='A tool to bootstrap K3s over SSH in < 60s' arch=('x86_64') url='https://github.com/alexellis/k3sup' license=('MIT') depends=('glibc' 'openssh') makedepends=('git' 'go>=1.20') source=("${pkgname}-${pkgver}.tar.gz::https://github.com/alexellis/k3sup/archive/${pkgver}.tar.gz") sha256sums=('24939844ac6de581eb05ef6425c89c32b2d0e22800f1344c19b2164eec846c92') _commit=('1d2e443ea56a355cc6bd0a14a8f8a2661a72f2e8') build() { cd "$pkgname-$pkgver" export CGO_CPPFLAGS="${CPPFLAGS}" export CGO_CFLAGS="${CFLAGS}" export CGO_CXXFLAGS="${CXXFLAGS}" export CGO_LDFLAGS="${LDFLAGS}" export GOFLAGS="-buildmode=pie -trimpath -mod=readonly -modcacherw" go build \ -ldflags "-s -w -X github.com/alexellis/k3sup/cmd.Version=$pkgver -X github.com/alexellis/k3sup/cmd.GitCommit=$_commit" \ -o k3sup \ .
for shell in bash fish zsh; do ./k3sup completion "$shell" > "$shell-completion" done } package() { cd "$pkgname-$pkgver" install -Dm755 -t "$pkgdir/usr/bin" k3sup mkdir -p "${pkgdir}/usr/share/bash-completion/completions/" mkdir -p "${pkgdir}/usr/share/zsh/site-functions/" mkdir -p "${pkgdir}/usr/share/fish/vendor_completions.d/" install -Dm644 bash-completion "$pkgdir/usr/share/bash-completion/completions/k3sup" install -Dm644 fish-completion "$pkgdir/usr/share/fish/vendor_completions.d/k3sup.fish" install -Dm644 zsh-completion "$pkgdir/usr/share/zsh/site-functions/_k3sup" install -Dm644 -t "$pkgdir/usr/share/licenses/$pkgname" LICENSE } Now we run namcap again to verify, after we run makepkg
$ namcap ./PKGBUILD
$ makepkg -s && namcap k3sup-0.13.0-1-x86_64.pkg.tar.zst
k3sup W: Dependency included, but may not be needed ('openssh')
Now it all looks amazingly secure at the binary level, as long as your glibc version doesn't have a security vulnerability ;)
Let's push it to the AUR now. Remember to renew .SRCINFO before every push. Also pay attention to package names already taken in the AUR.
$ updpkgsums && makepkg --printsrcinfo > .SRCINFO
$ git init
$ git remote add origin https://aur.archlinux.org/k3sup.git
$ git add . && git commit -m "initial release"
$ git push -u origin master
The End Result https://aur.archlinux.org/packages/k3sup
$ git clone https://aur.archlinux.org/k3sup.git
$ cd ./k3sup && less ./PKGBUILD
$ makepkg -si
The References Arch Wiki k9s goreleaser k3sup "Victory usually goes to the army who has better trained officers and men"
— Sun Tzu
","permalink":"http://occamist.dev/posts/packaging-go-for-arch-linux-tutorial/","summary":"<p>In this tutorial, I will show how to package a Go application for the <a href="https://aur.archlinux.org/">Arch Linux User Repository (AUR)</a>. We will open an AUR account, go through the PKGBUILD template, and follow the Arch Wiki guidelines for Go. By the end of the tutorial, you will be able to upload your own Arch package that uses Go to the AUR.</p>
<h2 id="the-requirements">The Requirements</h2>
<ul>
<li>Git</li>
<li>Go</li>
<li>Arch Linux x86_64</li>
<li>AUR account</li>
</ul>
<h2 id="setting-up-aur-account-and-ssh-key">Setting up AUR account and SSH key</h2>
<p>We will fill in the username and the email in this form, as well as the most important field, the public ssh key. The rest are optional.</p>","title":"Packaging Go for Arch Linux Tutorial"},{"content":"Getting Started Hello everyone, in this blog I will help you bootstrap your Arch Linux setup in 5-10 minutes, and show you where to look when you need help. Arch Linux was one of the most difficult distros to set up until the new, convenient archinstall script arrived. I will be using the archinstall script in this guide.
It is known for its user-friendly TUI installation phase.
First of all, make sure you download the ISO from the Download Page. Arch is used all over the world, so don't be scared of picking the mirror closest to you; all the mirrors have SSL/TLS enabled and the contents are identical, so you don't need to worry about it. Also, archlinux-x86_64.iso and archlinux-YYYY.MM.DD-x86_64.iso correspond to the same ISO; there is no difference.
Second of all, you need software to burn the ISO. I used to use Fedora Media Writer or the openSUSE image writer, but Balena Etcher is the cross-platform option for whatever OS you are on: after you insert the USB flash stick, just select the ISO and click burn/write.
The Requirements USB flash stick (recommended space >= 2G) ISO burner/writer software (Balena Etcher) The internet (either reliable ethernet or wifi) The keyboard Setting up with Archinstall After you have burned the ISO, plug in the USB flash stick and start the computer; you should see the classic Arch boot menu.
Firstly, you will need an internet connection. If you have ethernet, you can just plug in the ethernet cable and continue. If you have wifi, you need to type a few commands and then enter the password (passphrase).
root@archiso $ iwctl
[iwd] $ help
[iwd] $ device list
[iwd] $ station wlan0 scan
[iwd] $ station wlan0 get-networks
[iwd] $ station wlan0 connect YOUR_NETWORK_NAME
Passphrase: *********
[iwd] $ station wlan0 show
[iwd] $ exit
Then we type archinstall and enter the TUI installation phase. There are a couple of options presented to us;
root@archiso $ archinstall
Language I pick English.
Mirror I pick the country mirror closest to me.
Locales I pick my locales for my keyboard.
Disk configuration I pick BTRFS with compression/subvolumes enabled and automatic disk partitioning. Please skip disk encryption, as it complicates many things.
Bootloader I like the standard GRUB.
Swap Always yes.
Hostname/Root Password/User Account Profile I pick Desktop/Gnome/all open-source drivers. You may need to pay extra attention to the drivers if you have an NVIDIA GPU.
Audio Pipewire for recent hardware, PulseAudio for very old hardware.
Kernel linux is the normal, latest one (I prefer this). linux-hardened is full of security features and therefore has lots of restraints. linux-lts is the LTS (old) kernel. linux-zen is for performance machines with higher power usage, such as gaming rigs. Network Configuration Pick NetworkManager if you are using GNOME or KDE.
Timezone / Automatic Time Sync I pick my timezone and enable automatic time sync, as the clock changes between winter and summer time.
Install Finally, click install and enjoy; you can remove the flash stick when the installer asks you to do so at the end.
Tuning Pacman Pacman is the fastest package manager in the unixverse, but a little bit of tuning is required to make it skyrocket through the roof. Big warning: once it is tuned, you will never enjoy mediocre package managers again.
In /etc/pacman.conf, uncomment the #Color row to Color and add a row ILoveCandy under it; also change #ParallelDownloads = 5 to ParallelDownloads = 10 (you can raise the number further if your internet is good).
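After those edits, the relevant excerpt of /etc/pacman.conf should look roughly like this (a sketch of the end state, assuming otherwise default contents):
# /etc/pacman.conf (excerpt after tuning)
[options]
Color
ILoveCandy
ParallelDownloads = 10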
We also need to set up paccache (a weekly pacman cache cleaner)
$ sudo pacman -S pacman-contrib
$ sudo systemctl enable paccache.timer
Lastly, let's edit /etc/makepkg.conf and change MAKEFLAGS="-j4" to MAKEFLAGS="-j$(nproc)"
Pacman Cheatsheet The dnf upgrade equivalent; upgrades your whole system, including the kernel
$ pacman -Syu
The dnf search equivalent; only searches through official repos, not the AUR
$ pacman -Ss xPackage
The dnf install equivalent; installs the package onto the system
$ pacman -S xPackage
The dnf remove equivalent; removes the package from the system
$ pacman -Rs xPackage
The dnf list --all equivalent; lists all installed packages
$ pacman -Q
Using AUR for Unofficial Packages I will show how you can install Brave through the AUR. The AUR is everyone's ship for uploading packages: anyone can make packages and upload them to the AUR, so we need to be EXTREMELY careful while using it.
We will be downloading Brave from brave-bin. The common Arch package convention is to name packaged software accordingly:
abc (built from a stable version of the source) abc-git (built from the main branch of the source) abc-bin (a pre-built stable-version binary) We inspect the URLs in the PKGBUILD file for brave-bin to ensure we are not downloading the next crypto-miner bot for our GPU.
$ mkdir ~/AUR && cd ~/AUR
$ git clone https://aur.archlinux.org/brave-bin.git
$ cd ./brave-bin
$ less PKGBUILD
$ makepkg -si
Important note: DO NOT ever install an AUR helper. It is your job as a user to check every single AUR package before installing something; if something is not in the official Arch repos, use the AUR wisely, or even contribute to it!
Enable Microcode Microcode is a very low-level instruction set stored permanently in the processor; microcode updates help your processor's stability and computing efficiency.
For Intel intel-ucode, for AMD amd-ucode
$ sudo pacman -S intel-ucode
Enable Bluetooth Sometimes bluetooth may not work on Linux in general, but since you are on Arch, you just need a single package to make it work for every bluetooth driver out there. Remember, this is not Linux's fault; it is the fault of hardware manufacturers not open-sourcing their bluetooth drivers.
sudo pacman -S bluez bluez-utils
sudo systemctl start bluetooth
sudo systemctl status bluetooth
sudo systemctl enable bluetooth
● bluetooth.service - Bluetooth service
Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; preset: disabled)
Active: active (running) since Tue 2023-07-01 11:31:45 BST; 1 day 7h ago
Docs: man:bluetoothd(8)
Main PID: 633 (bluetoothd)
Status: "Running"
Tasks: 1 (limit: 76824)
Memory: 3.0M
CPU: 46.646s
CGroup: /system.slice/bluetooth.service
└─633 /usr/lib/bluetooth/bluetoothd
Disable GRUB Screen The thing is, seeing the GRUB screen every time you turn on your computer can be annoying unless you have multiple OSes on a single machine. So I recommend changing your /etc/default/grub: under the GRUB settings, I change GRUB_TIMEOUT_STYLE=countdown to GRUB_TIMEOUT_STYLE=hidden to get rid of the 5-second wait on the GRUB screen.
You can still reach the GRUB screen with the special Fx key on your machine.
My Personal Taste of Software (Optional) $ sudo pacman -S wget curl git neofetch sl cowsay fortune-mod lolcat nmap mandoc vlc thunderbird obsidian discord gimp converseen calibre libreoffice-still go kubectl docker terraform rsync nodejs pnpm vscode Installing vscode on Arch requires a small metadata patch for the vscode marketplace, as this vscode build is not directly tied to Microsoft telemetry or the Microsoft binary. We will use the AUR for this purpose; if we don't patch the vscode marketplace, some plugins will be missing. Please ALWAYS inspect the contents of the PKGBUILD and the other scripts for your own safety.
$ cd ~/AUR
$ git clone https://aur.archlinux.org/code-marketplace.git
$ cd ./code-marketplace
$ less PKGBUILD
$ makepkg -si
Install Zsh (Optional) Zsh is the fully-fledged shell for end users. You can install it with sudo pacman -S zsh, then run chsh -s $(which zsh)
After you have installed zsh, we will theme it with ohmyzsh and powerlevel10k, plus a couple of shell plugins: zsh completions, zsh autosuggestions and zsh highlighting.
The installation of ohmyzsh is quite straightforward, but please do INSPECT the shell script for peace of mind;
$ wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
$ less install.sh
$ sh install.sh
Log out, log back in, and open your shell to complete the prompted options.
The installation of powerlevel10k is a bit different. You need to manually install these 4 fonts: MesloLGS NF "Regular, Bold, Italic, Bold Italic"; double-click each .ttf file and click install. Here is the link to the fonts
Let's now install powerlevel10k through our ohmyzsh setup.
$ git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
Now we can set ZSH_THEME="powerlevel10k/powerlevel10k" in the ~/.zshrc file. And simply run exec zsh && p10k configure.
Last but not least, if you check out the zsh-users repository, there are 3 must-have zsh productivity plugins: completions, highlighting and autosuggestions. Let's install them one by one.
We install zsh-completions through ohmyzsh again. This will give you custom CLI completion whenever a program provides its zsh completions.
$ git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:-${ZSH:-~/.oh-my-zsh}/custom}/plugins/zsh-completions
Add it to FPATH in your .zshrc by adding the following line before source "$ZSH/oh-my-zsh.sh"
fpath+=${ZSH_CUSTOM:-${ZSH:-~/.oh-my-zsh}/custom}/plugins/zsh-completions/src
Then we install zsh-syntax-highlighting and zsh-autosuggestions. These will give you correct text highlighting and autosuggestions from your previous terminal history.
$ git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
$ git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
Now modify plugins= in your ~/.zshrc file.
plugins=( # other plugins...
zsh-syntax-highlighting zsh-autosuggestions ) Additional Help If you get stuck or encounter a very difficult problem, you can check out the forums; the Arch forums/wiki/AUR are what make Arch the best distro in the unixverse.
"Power to the people"
— Unknown
","permalink":"http://occamist.dev/posts/arch-installation-for-beginners/","summary":"<p>Hello everyone, in this blog I will help you bootstrap your Arch Linux setup in 5-10 minutes, and show you where to look when you need help. Arch Linux was one of the most difficult distros to set up until the new, convenient <code>archinstall</code> script arrived. I will be using the <code>archinstall</code> script in this guide. It is known for its user-friendly TUI installation phase.</p>
<p>First of all, make sure you download the ISO from the <a href="https://archlinux.org/download/">Download Page</a>. Arch is used all over the world, so don't be scared of picking the mirror closest to you; all the mirrors have SSL/TLS enabled and the contents are identical, so you don't need to worry about it. Also, <code>archlinux-x86_64.iso</code> and <code>archlinux-YYYY.MM.DD-x86_64.iso</code> correspond to the same ISO; there is no difference.</p>","title":"Arch Installation for Beginners"},{"content":"The Intro In this guide, I will show how to set up a simple Kubernetes (K3S) cluster with 1 master node and 2 worker nodes on Hetzner Cloud. My main goal is to make newcomers' transition to Kubernetes very smooth, as someone who suffered enough with complex tutorials and surprise bills and didn't get enough chances to poke at a Kubernetes cluster.
This tutorial should be applicable to any cloud provider, but be warned that pricing can differ wildly. If you have come to learn Kubernetes, this could be your starting point to set up your own cluster and start poking around an actual production-ready cluster with k3s.
Quick Q&A What is K3S? It is a production-ready, stable and lightweight flavor of Kubernetes; think of it like Debian being a flavor of Linux. It is also the best choice for learning multi-master and multi-worker node architecture.
Why not teach us minikube/kind/microk8s? They are not good enough for production workloads.
Why Hetzner Cloud? It is a super cheap cloud with a simple UI for me; you can do the same things on Vultr, Linode or Digital Ocean. Note: I am not sponsored by Hetzner Cloud
Do I need to install docker? Not needed, because the k3s binaries ship with everything that is needed
What is the difference between a master node and a worker node? A master node is often referred to as a K3S server, and a worker node is often referred to as a K3S agent. For high availability (HA), the recommendation is to have at least 3 master nodes, 3 worker nodes, and 1 managed database outside of your master nodes instead of the embedded SQLite database.
Setup your SSH key, network and compute instances Before we create our compute instances (VPS), we need an SSH key and a private network. Let's quickly go over it. We generate our ssh-key, load it on our local machine, and add the public ssh-key to the cloud UI. This panel may be different depending on your cloud provider. 🗝️
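If you have never generated one before, the commands look roughly like this (the key type, file name and email here are my illustrative choices, not from the original post):
$ ssh-keygen -t ed25519 -f ~/.ssh/hetzner -C "your_email@example.com"
$ eval "$(ssh-agent -s)" && ssh-add ~/.ssh/hetzner
$ cat ~/.ssh/hetzner.pub   # paste this output into the cloud provider's SSH key box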
After that, let's quickly create our private network, which will be used by the cluster's nodes for communication between the master and the worker nodes. ☎️
We are finally ready to create our compute instances. Now I will create my master node, which can also be called the k3s server. I will name my master node "jack-sparrow". I will pick "Debian 11" as my Linux distro choice, for rock-solid server stability and for being a reliable open-source project.
I will also take advantage of multiple-instance creation and set the instance count to 3. The other 2 instances will be my worker nodes, which can also be called k3s agents. I will call them "black-pearl" and "flying-dutchman". If you want to extend your worker nodes, you can keep going with all the ship names from Pirates of the Caribbean. For master nodes, I will be using captain names 🏴‍☠️
I have picked the CX11 instance, which is the cheapest option available; 6GB RAM and 60GB SSD in total across the three instances should be sufficient for most of your projects. I skipped the additional volume and the firewall. I added my created network and my created SSH key. Remember, this is for broke captains. 🚢
Prerequisites for K3S SSH into the master node and the worker nodes, and update the /etc/hosts files before we fetch the great k3s binaries, which include everything: the containerd runtime and a CNI (container network interface). 🐋
$ ssh root@94.130.227.124
$ apt update && apt upgrade
$ apt install apparmor apparmor-utils # Debian dependency for the kernel
After you have updated every node's /etc/hosts file with GNU nano, you can optionally install the nmap CLI tool to make sure your network is functioning properly and the other instances are reachable, with nmap -sn 10.0.0.1/24 🕸️
Installation of K3S On the jack-sparrow instance (master node), let's install the k3s server.
$ curl -sfL https://get.k3s.io | sh -
$ systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2021-12-04 17:39:07 UTC; 3min 0s ago
Docs: https://k3s.io
Process: 26339 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 26341 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 26342 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 26343 (k3s-server)
Tasks: 18
Memory: 502.4M
CPU: 27.604s
CGroup: /system.slice/k3s.service
└─26343 /usr/local/bin/k3s server
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
jack-sparrow Ready control-plane,master 4m4s v1.21.7+k3s1 94.130.227.124 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-9-amd64 docker://20.10.11
Let's configure the incoming/outgoing ports for jack-sparrow, our k3s server.
We will also obtain the token, which will be used during the k3s agent setup.
$ apt install ufw
$ ufw default allow outgoing
$ ufw default deny incoming
$ ufw allow 22,80,443,6443,10250/tcp
$ ufw --force enable
Firewall is active and enabled on system startup
$ cat /var/lib/rancher/k3s/server/node-token
bc3f7dee0308f09e5a3645f4b06343eea2644296cdK1d79a977d0e193a10187497f::server:9ae1e45b8b58be56a8282a84c7e3715b
Let's install the k3s agents on our compute instances called black-pearl and flying-dutchman! Our master IP (jack-sparrow) is 10.0.0.4 in our nebula network, and our token is bc3f7dee0308f09e5a3645f4b06343eea2644296cdK1d79a977d0e193a10187497f::server:9ae1e45b8b58be56a8282a84c7e3715b
curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.4:6443 K3S_TOKEN=bc3f7dee0308f09e5a3645f4b06343eea2644296cdK1d79a977d0e193a10187497f::server:9ae1e45b8b58be56a8282a84c7e3715b sh -
[INFO] Finding release for channel stable
[INFO] Using v1.21.7+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.7+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.7+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
Perfect! Let's jump into jack-sparrow and see what we got!
root@jack-sparrow:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
jack-sparrow Ready control-plane,master 50m v1.21.7+k3s1 94.130.227.124 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-9-amd64 docker://20.10.11
black-pearl Ready <none> 28s v1.21.7+k3s1 116.203.32.141 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-9-amd64 docker://20.10.11
flying-dutchman Ready <none> 12s v1.21.7+k3s1 116.203.90.71 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-9-amd64 docker://20.10.11
Extras (Install LENS) Lens is a Kubernetes UI for managing your cluster resources. It comes bundled with Helm and kubectl for your local workstation. You can install the Lens binary from GitHub under the name lensapp/lens. We will take the kube config from jack-sparrow and paste it into your Lens. To do that, let's find our kube config and copy it, changing the server IP address from 127.0.0.1 to the external IP address of jack-sparrow.
root@jack-sparrow:~# cat /etc/rancher/k3s/k3s.yaml
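The relevant part of that file looks roughly like this (an illustrative excerpt I sketched from a typical k3s.yaml; the certificate data is elided and your IPs will differ):
# /etc/rancher/k3s/k3s.yaml (excerpt)
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <elided>
    server: https://127.0.0.1:6443   # change to https://94.130.227.124:6443 for Lens
  name: default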
Let's create pirate-deployment.yaml, pirate-service.yaml and traefik-ingress.yaml to see the generated Lens metrics, and apply them in our cluster.
# pirate-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: pirate-deploy spec: replicas: 3 selector: matchLabels: app: pirate-app template: metadata: labels: app: pirate-app spec: containers: - name: simple-pirate image: occamist/simple-pirate ports: - containerPort: 80 # pirate-service.yaml apiVersion: v1 kind: Service metadata: name: pirate-svc spec: ports: - protocol: TCP port: 80 targetPort: 80 selector: app: pirate-app # traefik-ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: pirate-ingress annotations: kubernetes.io/ingress.class: traefik spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: pirate-svc port: number: 80
$ kubectl create -f pirate-deployment.yaml
$ kubectl create -f pirate-service.yaml
$ kubectl create -f traefik-ingress.yaml
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
pirate-ingress <none> * 116.203.32.141,116.203.90.71,94.130.227.124 80 136m
Now we can enable Lens metrics for the pinned cluster: go to its settings and install everything required via the Lens Metrics tab. We will need Prometheus, kube-state-metrics and the node exporter from the Lens Metrics section.
The End I really thank you for making it to the end! I tried to simplify things as much as possible 🙂
I want to send special thanks and give credit to Victor Shamallah and Alex Ellis. I also hope this guide was helpful to the readers and the newcomers. If you have learned something new, feel free to share it. If you have any feedback/suggestions/problems, spam the comments section. Have a good day!
References Install K3S on Ubuntu with Docker K3S Installation Requirements Kubernetes Ingress - Traefik ","permalink":"http://occamist.dev/posts/broke-captains-kubernetes-cluster-guide/","summary":"<p>In this guide, I will show how to set up a simple Kubernetes (K3S) cluster with 1 master node and 2 worker nodes on Hetzner Cloud. My main goal is to make newcomers' transition to Kubernetes very smooth, as someone who suffered enough with complex tutorials and surprise bills and didn't get enough chances to poke at a Kubernetes cluster.</p>
<p>This tutorial should be applicable to any cloud provider, but be warned that pricing can differ wildly. If you have come to learn Kubernetes, this could be your starting point to set up your own cluster and start poking around an actual production-ready cluster with k3s.</p>","title":"Broke Captain's Kubernetes Cluster Guide (super simple & convenient cost)"},{"content":"The Intro Hi everyone, this is the 2nd part of the series; we will be developing our API in this part. I will assume you have already followed the previous part, set up faasd and CockroachDB on your cloud server instance, and have faas-cli on both your client computer and the cloud server instance. I will also assume you have Go on your computer and a proper text editor. Let's quickly get started.
highscore-api-github-repo
Requirements:
Go knowledge docker hub account faas-cli up and running faasd server basic SQL knowledge First, we want to make sure your faas-cli works correctly against your server; you should already know your server's IP address, your username and your password for faasd.
Let\u0026rsquo;s see if the server instance validates us.\nfaas-cli login -g http://23.88.60.124:8080 -u admin -p jackthegiant Faasd Project Init faas-cli template store pull golang-http faas-cli new --lang golang-http get-highscores The above command will create a yml file and a function handler that we will have to adjust for faasd. As an initial clean-up, I will rename my get-highscores.yml to stack.yml; this file will contain our functions for faasd. It is common practice to name it stack.yml because you then need one less flag (faas-cli up instead of faas-cli up -f filename.yml).\nI will also change the provider\u0026rsquo;s gateway to my cloud server instance, which is http://[[SERVER_IP]]:8080. In my case, it is http://23.88.60.124:8080.\nThe other most important part is to use your Docker Hub account name in the image names and turn on Go modules (GO111MODULE) in build_args. Here is what it looks like after tidying up stack.yml. Make sure you log in to your Docker Hub account and create a repository there first.\nversion: 1.0 provider: name: openfaas gateway: http://23.88.60.124:8080 functions: get-highscores: lang: golang-http handler: ./get-highscores image: occamist/get-highscores:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db Now that the initial configuration is set up, let\u0026rsquo;s deploy the generated handler to see that it deploys correctly. Your template code should look like this. The best part is that you can now deploy very easily with a single command. This single up command will build your container (faas-cli build), push it to the container registry (faas-cli push), then pull that container onto your cloud server instance (faas-cli deploy).\nget-highscores/handler.go\npackage function import ( \u0026#34;fmt\u0026#34; \u0026#34;net/http\u0026#34; handler \u0026#34;github.com/openfaas/templates-sdk/go-http\u0026#34; ) // Handle a function invocation func Handle(req handler.Request) (handler.Response, error) { var err error message := fmt.Sprintf(\u0026#34;Body: %s\u0026#34;, string(req.Body)) return handler.Response{ Body: []byte(message), StatusCode: http.StatusOK, }, err } docker login faas-cli up You can additionally use faas-cli list to see running functions. Now I will grab sqlc to generate a repository layer for our Go function handler. To use sqlc, you install its CLI and add a sqlc.json file which points to our queries.sql and schema.sql.\ngo get github.com/kyleconroy/sqlc/cmd/sqlc Here is what my sqlc.json, schema.sql and queries.sql look like.
If you don\u0026rsquo;t know basic SQL, I strongly suggest you visit the W3Schools SQL docs for a quick recap and have a look at the sqlc docs.\nsqlc.json\n{ \u0026#34;version\u0026#34;: \u0026#34;1\u0026#34;, \u0026#34;packages\u0026#34;: [ { \u0026#34;path\u0026#34;: \u0026#34;repository\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;repository\u0026#34;, \u0026#34;queries\u0026#34;: \u0026#34;queries.sql\u0026#34;, \u0026#34;schema\u0026#34;: \u0026#34;schema.sql\u0026#34; } ] } schema.sql\nCREATE TABLE highscores ( id BIGSERIAL PRIMARY KEY, username TEXT NOT NULL UNIQUE, score BIGINT NOT NULL ); queries.sql\n-- name: GetHighscore :one SELECT * FROM highscores WHERE username = $1 LIMIT 1; -- name: ListHighscores :many SELECT * FROM highscores ORDER BY score; -- name: CreateHighscore :one INSERT INTO highscores(username, score) VALUES ($1, $2) RETURNING *; -- name: UpdateHighscore :one UPDATE highscores SET score = $2 WHERE id = $1 RETURNING *; -- name: DeleteHighscore :exec DELETE FROM highscores WHERE username = $1; Now we can generate our repository layer since we have defined all of the database interactions. The command below generates all of the Go repository code from the SQL.\nsqlc generate I will initialize Go modules and get pq, which is a pure Go Postgres driver. Why do we use a Postgres driver for CockroachDB? CockroachDB speaks the PostgreSQL wire protocol, which means it is almost fully compatible with Postgres drivers and ORMs.\ngo mod init github.com/occamist/highscore-api go get github.com/lib/pq Let\u0026rsquo;s finish up our handler for get-highscores. I will establish a database connection and check for the correct HTTP method. I will also check if there is a username query for the highscore. If yes, I will return that specific user\u0026rsquo;s highscore. Otherwise, I will return all of the highscores in the database.
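For reference, the repository package that sqlc generates exposes roughly the API below. This is a trimmed sketch so you can see what the handlers will be calling; the real generated code also includes a DBTX interface, the remaining queries, and params structs, and exact signatures may differ between sqlc versions.

// repository (abridged sketch of sqlc output, not the verbatim generated file)
package repository

import (
	"context"
	"database/sql"
)

// Highscore mirrors the highscores table in schema.sql.
type Highscore struct {
	ID       int64
	Username string
	Score    int64
}

// Queries wraps the database handle; the handlers call New(db).
type Queries struct {
	db *sql.DB
}

func New(db *sql.DB) *Queries {
	return &Queries{db: db}
}

// GetHighscore corresponds to the ":one" query in queries.sql.
func (q *Queries) GetHighscore(ctx context.Context, username string) (Highscore, error) {
	row := q.db.QueryRowContext(ctx,
		"SELECT id, username, score FROM highscores WHERE username = $1 LIMIT 1",
		username)
	var h Highscore
	err := row.Scan(&h.ID, &h.Username, &h.Score)
	return h, err
}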
Please make sure to import lib/pq manually.\nget-highscores/handler.go\npackage function import ( \u0026#34;database/sql\u0026#34; \u0026#34;encoding/json\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;net/http\u0026#34; \u0026#34;net/url\u0026#34; \u0026#34;os\u0026#34; \u0026#34;strings\u0026#34; _ \u0026#34;github.com/lib/pq\u0026#34; \u0026#34;github.com/occamist/highscore-api/repository\u0026#34; handler \u0026#34;github.com/openfaas/templates-sdk/go-http\u0026#34; ) func Handle(req handler.Request) (handler.Response, error) { db, err := sql.Open(\u0026#34;postgres\u0026#34;, fmt.Sprintf(\u0026#34;host=%s port=%s user=%s dbname=%s sslmode=disable\u0026#34;, os.Getenv(\u0026#34;POSTGRES_HOST\u0026#34;), os.Getenv(\u0026#34;POSTGRES_PORT\u0026#34;), os.Getenv(\u0026#34;POSTGRES_USER\u0026#34;), os.Getenv(\u0026#34;POSTGRES_DB\u0026#34;))) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to connect to db: %v\u0026#34;, err) } defer func() { if err := db.Close(); err != nil { log.Printf(\u0026#34;failed to close db: %v\u0026#34;, err) } }() if req.Method != http.MethodGet { return handler.Response{ StatusCode: http.StatusBadRequest, }, fmt.Errorf(\u0026#34;invalid http method %s\u0026#34;, req.Method) } values, err := url.ParseQuery(req.QueryString) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to parse query string: %v\u0026#34;, err) } var rawBody []byte queries := repository.New(db) username := values.Get(\u0026#34;username\u0026#34;) if strings.TrimSpace(username) != \u0026#34;\u0026#34; { highscore, err := queries.GetHighscore(req.Context(), username) if err != nil { if err == sql.ErrNoRows { return handler.Response{ StatusCode: http.StatusNotFound, }, nil } return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to get highscore for username %s: %v\u0026#34;, username, err) } rawBody, err = json.Marshal(highscore) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to marshal a highscore: %v\u0026#34;, err) } } else { highscores, err := queries.ListHighscores(req.Context()) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to list highscores: %v\u0026#34;, err) } rawBody, err = json.Marshal(highscores) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to marshal highscores: %v\u0026#34;, err) } } return handler.Response{ Body: rawBody, StatusCode: http.StatusOK, }, nil } Now I will create my second function, create its Docker Hub repo and tidy up stack.yml.
I will also add a token credential so that not just anyone can add highscores to my database.\nfaas-cli new --lang golang-http post-highscore --append stack.yml version: 1.0 provider: name: openfaas gateway: http://23.88.60.124:8080 functions: get-highscores: lang: golang-http handler: ./get-highscores image: occamist/get-highscores:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db post-highscore: lang: golang-http handler: ./post-highscore image: occamist/post-highscore:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db BEARER_TOKEN: QeV5f7eSvJnO0dDYCc9DcH5BEwpm7P3j I will create two packages, model and middleware. The model only describes what a request body should look like, and the middleware performs a bearer token check on the Authorization header against our specified BEARER_TOKEN env variable.\nmodel/highscore.go\npackage model type Highscore struct { Username string `json:\u0026#34;username\u0026#34;` Score int64 `json:\u0026#34;score\u0026#34;` } middleware/auth.go\npackage middleware import ( \u0026#34;errors\u0026#34; \u0026#34;os\u0026#34; \u0026#34;strings\u0026#34; handler \u0026#34;github.com/openfaas/templates-sdk/go-http\u0026#34; ) func Authorization(req handler.Request) error { authHeader := req.Header.Get(\u0026#34;Authorization\u0026#34;) authHeaderValues := strings.Split(authHeader, \u0026#34; \u0026#34;) if len(authHeaderValues) != 2 || authHeaderValues[0] != \u0026#34;Bearer\u0026#34; { return errors.New(\u0026#34;authorization header is in the wrong format\u0026#34;) } if authHeaderValues[1] != os.Getenv(\u0026#34;BEARER_TOKEN\u0026#34;) { return errors.New(\u0026#34;bearer token is not valid\u0026#34;) } return nil } Finishing up the handler for post-highscore: I will establish a database connection, check for the correct HTTP method, and check the authorization header. If there is no user with that username, we will create a new one and return it in the body. If there is someone with that username, we will compare the incoming request\u0026rsquo;s highscore with the persisted one. If the incoming score is higher, we update the record and return it in the body.
Otherwise, we return an empty 200 to the request.\npost-highscore/handler.go\npackage function import ( \u0026#34;database/sql\u0026#34; \u0026#34;encoding/json\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;net/http\u0026#34; \u0026#34;os\u0026#34; _ \u0026#34;github.com/lib/pq\u0026#34; \u0026#34;github.com/occamist/highscore-api/middleware\u0026#34; \u0026#34;github.com/occamist/highscore-api/model\u0026#34; \u0026#34;github.com/occamist/highscore-api/repository\u0026#34; handler \u0026#34;github.com/openfaas/templates-sdk/go-http\u0026#34; ) func Handle(req handler.Request) (handler.Response, error) { db, err := sql.Open(\u0026#34;postgres\u0026#34;, fmt.Sprintf(\u0026#34;host=%s port=%s user=%s dbname=%s sslmode=disable\u0026#34;, os.Getenv(\u0026#34;POSTGRES_HOST\u0026#34;), os.Getenv(\u0026#34;POSTGRES_PORT\u0026#34;), os.Getenv(\u0026#34;POSTGRES_USER\u0026#34;), os.Getenv(\u0026#34;POSTGRES_DB\u0026#34;))) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to connect to db: %v\u0026#34;, err) } defer func() { if err := db.Close(); err != nil { log.Printf(\u0026#34;failed to close db: %v\u0026#34;, err) } }() if req.Method != http.MethodPost { return handler.Response{ StatusCode: http.StatusBadRequest, }, fmt.Errorf(\u0026#34;invalid http method %s\u0026#34;, req.Method) } err = middleware.Authorization(req) if err != nil { return handler.Response{ StatusCode: http.StatusBadRequest, }, fmt.Errorf(\u0026#34;%v\u0026#34;, err) } var highscore model.Highscore err = json.Unmarshal(req.Body, \u0026amp;highscore) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to unmarshal highscore\u0026#34;) } queries := repository.New(db) existingHighscore, err := queries.GetHighscore(req.Context(), highscore.Username) if err != nil \u0026amp;\u0026amp; err != sql.ErrNoRows { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to get a highscore: %v\u0026#34;, err) } if existingHighscore.ID == 0 { params := repository.CreateHighscoreParams{Username: highscore.Username, Score: highscore.Score} createdHighscore, err := queries.CreateHighscore(req.Context(), params) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to create a highscore: %v\u0026#34;, err) } raw, err := json.Marshal(createdHighscore) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to marshal created highscore\u0026#34;) } return handler.Response{ Body: []byte(raw), StatusCode: http.StatusOK, }, nil } if highscore.Score \u0026gt; existingHighscore.Score { params := repository.UpdateHighscoreParams{ID: existingHighscore.ID, Score: highscore.Score} updatedHighscore, err := queries.UpdateHighscore(req.Context(), params) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to update a highscore: %v\u0026#34;, err) } raw, err := json.Marshal(updatedHighscore) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to marshal updated highscore\u0026#34;) } return handler.Response{ Body: []byte(raw), StatusCode: http.StatusOK, }, nil } return handler.Response{ StatusCode: http.StatusOK, }, nil }
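To sanity-check the token flow end to end, here is a minimal Go client that posts a highscore with the bearer token. This is a sketch: the gateway URL and token are the values from my stack.yml above, so substitute your own.

// A minimal client for post-highscore; URL and token come from stack.yml above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Function URLs follow faasd's /function/<name> convention.
	url := "http://23.88.60.124:8080/function/post-highscore"
	body := []byte(`{"username": "SCORPION", "score": 9000}`)

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		log.Fatalf("failed to build request: %v", err)
	}
	// The middleware expects "Bearer <token>" matching the BEARER_TOKEN env variable.
	req.Header.Set("Authorization", "Bearer QeV5f7eSvJnO0dDYCc9DcH5BEwpm7P3j")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	respBody, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(respBody))
}

Posting a score lower than the persisted one should come back as an empty 200, matching the update logic above, while a missing or malformed Authorization header should produce a 400.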
Now I will create my third and final handler and its Docker Hub repo. I will add a token credential to this handler as well, because not everyone needs to delete someone else\u0026rsquo;s highscore :) The final yaml structure is given below.\nfaas-cli new --lang golang-http delete-highscore --append stack.yml version: 1.0 provider: name: openfaas gateway: http://23.88.60.124:8080 functions: get-highscores: lang: golang-http handler: ./get-highscores image: occamist/get-highscores:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db post-highscore: lang: golang-http handler: ./post-highscore image: occamist/post-highscore:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db BEARER_TOKEN: QeV5f7eSvJnO0dDYCc9DcH5BEwpm7P3j delete-highscore: lang: golang-http handler: ./delete-highscore image: occamist/delete-highscore:latest build_args: GO111MODULE: on environment: POSTGRES_HOST: 23.88.60.124 POSTGRES_PORT: 26257 POSTGRES_USER: root POSTGRES_DB: highscore_db BEARER_TOKEN: Ru4BXyL7ALkey34cUJIIXBF67t1qrw37 This handler will also manage its database connection and validate the authorization header, then check the username in the URL query. Afterward, we delete the highscore that matches that username.\ndelete-highscore/handler.go\npackage function import ( \u0026#34;database/sql\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;net/http\u0026#34; \u0026#34;net/url\u0026#34; \u0026#34;os\u0026#34; \u0026#34;strings\u0026#34; _ \u0026#34;github.com/lib/pq\u0026#34; \u0026#34;github.com/occamist/highscore-api/middleware\u0026#34; \u0026#34;github.com/occamist/highscore-api/repository\u0026#34; handler \u0026#34;github.com/openfaas/templates-sdk/go-http\u0026#34; ) func Handle(req handler.Request) (handler.Response, error) { db, err := sql.Open(\u0026#34;postgres\u0026#34;, fmt.Sprintf(\u0026#34;host=%s port=%s user=%s dbname=%s sslmode=disable\u0026#34;, os.Getenv(\u0026#34;POSTGRES_HOST\u0026#34;), os.Getenv(\u0026#34;POSTGRES_PORT\u0026#34;), os.Getenv(\u0026#34;POSTGRES_USER\u0026#34;), os.Getenv(\u0026#34;POSTGRES_DB\u0026#34;))) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to connect to db: %v\u0026#34;, err) } defer func() { if err := db.Close(); err != nil { log.Printf(\u0026#34;failed to close db: %v\u0026#34;, err) } }() if req.Method != http.MethodDelete { return handler.Response{ StatusCode: http.StatusBadRequest, }, fmt.Errorf(\u0026#34;invalid http method %s\u0026#34;, req.Method) } err = middleware.Authorization(req) if err != nil { return handler.Response{ StatusCode: http.StatusBadRequest, }, fmt.Errorf(\u0026#34;%v\u0026#34;, err) } values, err := url.ParseQuery(req.QueryString) if err != nil { return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to parse query string: %v\u0026#34;, err) } queries := repository.New(db) username := values.Get(\u0026#34;username\u0026#34;) if strings.TrimSpace(username) != \u0026#34;\u0026#34; { err = queries.DeleteHighscore(req.Context(), username) if err != nil { if err == sql.ErrNoRows { return handler.Response{ StatusCode: http.StatusNotFound, }, nil } return handler.Response{ StatusCode: http.StatusInternalServerError, }, fmt.Errorf(\u0026#34;failed to delete a highscore for username %s: %v\u0026#34;, username, err) } } return handler.Response{ StatusCode: http.StatusOK, }, nil } Now we can do faas-cli up
and see the deployed functions. You can also check out the dashboard to get the endpoint names. If you are getting an internal server error (500), that means your function handler is returning an error, and you can easily debug it on the server. For example, I return an error for invalid HTTP methods. I can easily see the logs with this command\njournalctl -t openfaas-fn:get-highscores -r --lines 20 The end http://23.88.60.124:8080/function/get-highscores http://23.88.60.124:8080/function/post-highscore http://23.88.60.124:8080/function/delete-highscore These are the endpoints we have created. Overall, I enjoyed how we can have a serverless developer experience without the need for any giant cloud service that is impossible to move away from. Faasd is still a young but promising project for developers who don\u0026rsquo;t want to deal with k8s infra complexity. Hope you enjoyed and learned something new. If you have any questions/issues, feel free to let me know. Take care!\n","permalink":"http://occamist.dev/posts/serverless-highscore-go-api-with-faasd-and-cockroachdb-part-two/","summary":"\u003ch2 id=\"the-intro\"\u003eThe Intro\u003c/h2\u003e\n\u003cp\u003e    Hi everyone, this is the 2nd part of the series, we will be developing our API in this part. I will assume you have already followed the previous part and setup faasd and CockroachDB in your cloud server instance and have faas-cli in your both client computer and cloud server instance. I will also assume you have Go on your computer and a proper text editor. Let\u0026rsquo;s quickly get started.\u003c/p\u003e","title":"[DEV PART 2/2] Serverless Highscore Go API with Faasd and CockroachDB"},{"content":"The Series Intro Hi everyone, in this series we will be creating a serverless highscore REST API in Go, utilizing the most advanced and bleeding-edge open-source technologies such as Faasd(OpenFaaS engine) and CockroachDB(our cluster database). Keep in mind that we will actually need a server to do serverless computing :) (Plot Twist)\nIn this 1st part, we will be setting up the infrastructure side for Hetzner Cloud with Terraform, then in the 2nd part we will develop/deploy our functions with the help of faas-cli. Faasd is developed by OpenFaaS to make self-hosted serverless functions much easier to develop/deploy without any vendor lock-in to a giant cloud company or a K8s requirement. We will also be using CockroachDB as a single-node database for our cloud server instance. There are some requirements, but keep in mind that Terraform and Hetzner Cloud are not mandatory.\nThe Aim The aim is to give everyone a basic understanding of a self-hosted serverless REST API, its DevOps cycle, and how to interact with the self-hosted distributed/resilient open-source database CockroachDB from a serverless REST API.\nThe High-level Diagram\nRequirements:\nNon-windows based Terminal(Must have, could be WSL, Linux Terminal or Mac Terminal, just not Windows Powershell) Faas-cli in your client computer (Must have, we will be using this for pushing our code to our cloud server instance) CockroachDB (Must have, we will be using this as a database in our cloud server instance) Docker in your client computer (Must have, we will be using this for docker container registry) Terraform(optional) Hetzner Cloud account and API access token(optional) SSH key and key name created in Hetzner Cloud(optional) Why Hetzner Cloud? At least 1GB of RAM is required to be safe, so the CX11 pricing with 2 gigs of RAM is really nice.
You are free to use any cloud you like, but I suggest small cloud providers instead of giant ones for simplicity. Other alternatives could be Vultr, Linode, DigitalOcean\u0026hellip;\nWhy Faasd and CockroachDB? Faasd is an open-source serverless container technology which uses an actual physical server and manages your functions in AWS Lambda fashion at very little cost while being extremely lightweight. If you need globally scaled, geo-distributed serverless container technology, you can also port your serverless functions to OpenFaaS (which uses k8s under the hood) at any time. Faasd gets rid of vendor lock-in to any particular serverless cloud technology and allows you to focus only on your code. You also get a great dashboard.\nCockroachDB is a global-scale, geo-distributed open-source database. I will be using CockroachDB because it is simple and resilient to set up as a single node in our cloud server instance. If you need a managed production database, you can always switch to the free tier of Cockroach Cloud, which gives you the full powers of geo-distribution and auto-TLS. Again, you also get a perfect dashboard.\nNote: On your faasd server, running a local database with docker can confuse the containerd runtime. It is a must to install the faasd server only via the faasd github repository\u0026rsquo;s \u0026ldquo;./hack/install.sh\u0026rdquo;, which will only install what is necessary, such as containerd, faasd and faas-cli.\nFor demo purposes and purity, we won\u0026rsquo;t be adding TLS to our server and database, but I will leave links at the end. You should never run this in production without TLS certificates, and you will need at least 3 nodes for a production CockroachDB cluster.\nIntro First of all, you need faas-cli on your local client machine. You should get the binary and add it to your path. If you are on a Linux or Mac machine, moving the binary into \u0026ldquo;/usr/local/bin\u0026rdquo; will work. For Windows machines, you need to set environment variables in Control Panel\u0026gt;System and Security\u0026gt;System\u0026gt;Advanced System Settings(single-binary-faas-cli)\nFor setting up faasd on your cloud server instance, the easiest way to install faas-cli and faasd is to run the commands below. Then visit the dashboard and log in with your credentials at http://[[SERVER_IP]]:8080/ui/ Skip the manual commands below if you have Terraform.\ngit clone https://github.com/openfaas/faasd --depth=1 ./faasd/hack/install.sh sudo cat /var/lib/faasd/secrets/basic-auth-user; echo sudo cat /var/lib/faasd/secrets/basic-auth-password; echo Having Terraform and Hetzner Cloud is not a hard requirement. But if you are into Terraform and you have a Hetzner Cloud account, that\u0026rsquo;s great: you can run Terraform to provision your infrastructure faster and get verbose output from your CLI. Just make sure to set your Hetzner API token and Hetzner SSH key name in the vars.tf file.\ngit clone https://github.com/occamist/hetzner-terraform-faasd cd hetzner-terraform-faasd terraform init terraform plan terraform apply --auto-approve terraform output --json After you visit the URL and enter your username and password, you should see the pretty OpenFaaS dashboard. If you are stuck with a problem, make sure faas-cli is installed properly on your client and server by checking faas-cli version. You should also check if faasd.service is running on systemd.\nsystemctl status faasd.service After you SSH into your cloud instance, you can set up your single-node CockroachDB.
And check out the dashboard and the databases at http://[[SERVER_IP]]:7070.\ncurl https://binaries.cockroachdb.com/cockroach-v21.1.7.linux-amd64.tgz | tar -xz; sudo cp -i cockroach-v21.1.7.linux-amd64/cockroach /usr/local/bin/ cockroach start-single-node \\ --insecure \\ --listen-addr=0.0.0.0:26257 \\ --http-addr=0.0.0.0:7070 \\ --background \\ --accept-sql-without-tls Before we close CockroachDB, let\u0026rsquo;s create our schema and sample records.\ncockroach sql --insecure --host=localhost:26257 CREATE DATABASE highscore_db; USE highscore_db; SHOW TABLES; CREATE TABLE highscores ( id BIGSERIAL PRIMARY KEY, username TEXT NOT NULL UNIQUE, score BIGINT NOT NULL ); INSERT INTO highscores(username, score) VALUES (\u0026#39;SCORPION\u0026#39;, 100); INSERT INTO highscores(username, score) VALUES (\u0026#39;SUBZERO\u0026#39;, 100); INSERT INTO highscores(username, score) VALUES (\u0026#39;KITANA\u0026#39;, 300); INSERT INTO highscores(username, score) VALUES (\u0026#39;MILEENA\u0026#39;, 400); SELECT * FROM highscores; To quit, we can use this\ncockroach quit --insecure --host=localhost:26257 That sums up the infrastructure part. Let\u0026rsquo;s move on to the actual development. \u0026ldquo;It is actually so easy after you did it once\u0026rdquo; -The ancient old devops guy who died during the renewal of a manual TLS certificate\nThe end How to secure the faasd server with caddy: tls-caddy-faasd-server How to secure CockroachDB as a whole cluster: -\u0026gt; tls-cockroachdb The references Faasd in-depth look: faasd-book CockroachDB in-depth look: cockroachdb-university ","permalink":"http://occamist.dev/posts/serverless-highscore-go-api-with-faasd-and-cockroachdb-part-one/","summary":"\u003ch2 id=\"the-series-intro\"\u003eThe Series Intro\u003c/h2\u003e\n\u003cp\u003e    Hi everyone, in this series we will be creating serverless highscore REST API in Go and utilize the most advanced and bleeding-edge open-source technologies such as Faasd(OpenFaaS engine) and CockroachDB(our cluster database). \u003cstrong\u003eKeep in mind that we will actually need a server to do serverless computing :)\u003c/strong\u003e \u003cem\u003e(Plot Twist)\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eIn this 1st part, we will be setting up the infrastructure side for Hetzner Cloud with Terraform then in the 2nd part we will develop/deploy our functions with help of faas-cli. Faasd is developed by OpenFaaS to make self-hosted serverless functions much easier to develop/deploy without any vendor lock-in giant cloud company or K8s requirement. We will also be using CockroachDB as a single node database for our cloud server instance. There are some requirements but keep in mind that Terraform and Hetzner Cloud are not mandatory requirements.\u003c/p\u003e","title":"[INFRA PART 1/2] Serverless Highscore Go API with Faasd and CockroachDB"},{"content":"The Intro Hello everyone, in this post I will be demonstrating how you can run Localstack with Terraform and Docker, and give you a proof-of-concept Go application that you can tweak according to your own logic for anything you want to do, such as integration/system tests for AWS services in your own CI/CD or on localhost.\nGithub Repository for PoC(proof of concept): hotdog-PoC-repository\nRequirements:\nDocker docker-compose Terraform Go aws CLI A bit of lambda, dynamodb and kinesis knowledge Localstack is a testing/mocking framework for developing Cloud applications locally.
In theory, you can stick any AWS service into it and emulate it on localhost without ever needing a real AWS account. Localstack\u0026rsquo;s primary goal is to make integration/system testing less painful for developers.\nWhat was built? I built an imaginary hotdog food chain! (Note: No dogs were harmed in this process). Essentially, the PoC logic was this: I had 1 dogs dynamodb table which holds a dog model with 4 attributes: ID, name, isAlive and isEaten. Then I had 3 lambdas: dogCatcher, dogProcessor and hotDogDespatcher. Dog catcher\u0026rsquo;s responsibility is to get alive dogs via external API requests (I generated data for simplicity) with unique IDs and different names. Dog processor\u0026rsquo;s responsibility is to kill the dogs and persist the data that was sent from dog catcher. Hot dog despatcher\u0026rsquo;s responsibility is to give processed dogs (hot dogs) to people and observe which ones were eaten via external API requests (I assumed hot dogs get eaten if their name contains a case-insensitive \u0026ldquo;e\u0026rdquo; or \u0026ldquo;a\u0026rdquo;).\nAside from the lambdas, I had 3 kinesis streams and 3 kinesis triggers in order to make the lambdas talk to each other. The kinesis streams are named as follows: caughtDogs, hotDogs, eatenHotDogs.
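The full source lives in the repository, but here is a condensed Go sketch of what a lambda like dogCatcher boils down to: generate some dogs and push them onto the caughtDogs kinesis stream, with the SDK pointed at Localstack. The struct fields, stream name and payload shape follow the PoC described above, but treat the names and details as illustrative rather than the exact repository code; also note that inside Localstack's lambda containers the endpoint may need to be the docker network address rather than localhost.

// dogCatcher: a condensed sketch of the PoC lambda, not the verbatim repo code.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

// Dog mirrors the dogs dynamodb model described above.
type Dog struct {
	ID      string `json:"ID"`
	Name    string `json:"name"`
	IsAlive bool   `json:"isAlive"`
	IsEaten bool   `json:"isEaten"`
}

type catchRequest struct {
	Quantity int `json:"quantity"`
}

func handle(req catchRequest) error {
	// Point the SDK at the Localstack edge port instead of real AWS.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("ap-southeast-2"),
		Endpoint: aws.String("http://localhost:4566"),
	}))
	kc := kinesis.New(sess)

	for i := 0; i < req.Quantity; i++ {
		dog := Dog{ID: fmt.Sprintf("dog-%d", i), Name: "REX", IsAlive: true}
		payload, err := json.Marshal(dog)
		if err != nil {
			return err
		}
		// dogProcessor is triggered by this stream via the event source mapping.
		_, err = kc.PutRecord(&kinesis.PutRecordInput{
			StreamName:   aws.String("caughtDogs"),
			PartitionKey: aws.String(dog.ID),
			Data:         payload,
		})
		if err != nil {
			return err
		}
	}
	log.Printf("caught %d dogs", req.Quantity)
	return nil
}

func main() {
	lambda.Start(handle)
}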
Starting Localstack docker container with docker-compose version: \u0026#39;3.8\u0026#39; services: localstack: container_name: \u0026#34;localstack_main\u0026#34; image: localstack/localstack:latest environment: - SERVICES=dynamodb,lambda,kinesis - LAMBDA_EXECUTOR=docker_reuse - DOCKER_HOST=unix:///var/run/docker.sock - DEFAULT_REGION=ap-southeast-2 - DEBUG=1 - DATA_DIR=/tmp/localstack/data - PORT_WEB_UI=8080 - LAMBDA_DOCKER_NETWORK=localstack-tutorial - KINESIS_PROVIDER=kinesalite ports: - \u0026#34;53:53\u0026#34; - \u0026#34;53:53/udp\u0026#34; - \u0026#34;443:443\u0026#34; - \u0026#34;4566:4566\u0026#34; - \u0026#34;4571:4571\u0026#34; - \u0026#34;8080:8080\u0026#34; volumes: - /var/run/docker.sock:/var/run/docker.sock - localstack_data:/tmp/localstack/data networks: default: volumes: localstack_data: networks: default: external: name: localstack-tutorial docker-compose up -d --build Bootstrapping our infra with Terraform provider \u0026#34;aws\u0026#34; { region = \u0026#34;ap-southeast-2\u0026#34; access_key = \u0026#34;fake\u0026#34; secret_key = \u0026#34;fake\u0026#34; skip_credentials_validation = true skip_metadata_api_check = true skip_requesting_account_id = true endpoints { dynamodb = \u0026#34;http://localhost:4566\u0026#34; lambda = \u0026#34;http://localhost:4566\u0026#34; kinesis = \u0026#34;http://localhost:4566\u0026#34; } } // DYNAMODB TABLES resource \u0026#34;aws_dynamodb_table\u0026#34; \u0026#34;dogs\u0026#34; { name = \u0026#34;dogs\u0026#34; read_capacity = \u0026#34;20\u0026#34; write_capacity = \u0026#34;20\u0026#34; hash_key = \u0026#34;ID\u0026#34; attribute { name = \u0026#34;ID\u0026#34; type = \u0026#34;S\u0026#34; } } // KINESIS STREAMS resource \u0026#34;aws_kinesis_stream\u0026#34; \u0026#34;caught_dogs_stream\u0026#34; { name = \u0026#34;caughtDogs\u0026#34; shard_count = 1 retention_period = 30 shard_level_metrics = [ \u0026#34;IncomingBytes\u0026#34;, \u0026#34;OutgoingBytes\u0026#34;, ] } resource \u0026#34;aws_kinesis_stream\u0026#34; \u0026#34;hot_dogs_stream\u0026#34; { name = \u0026#34;hotDogs\u0026#34; shard_count = 1 retention_period = 30 shard_level_metrics = [ \u0026#34;IncomingBytes\u0026#34;, \u0026#34;OutgoingBytes\u0026#34;, ] } resource \u0026#34;aws_kinesis_stream\u0026#34; \u0026#34;eaten_hot_dogs_stream\u0026#34; { name = \u0026#34;eatenHotDogs\u0026#34; shard_count = 1 retention_period = 30 shard_level_metrics = [ \u0026#34;IncomingBytes\u0026#34;, \u0026#34;OutgoingBytes\u0026#34;, ] } // LAMBDA FUNCTIONS resource \u0026#34;aws_lambda_function\u0026#34; \u0026#34;dog_catcher_lambda\u0026#34; { function_name = \u0026#34;dogCatcher\u0026#34; filename = \u0026#34;dogCatcher.zip\u0026#34; handler = \u0026#34;main\u0026#34; role = \u0026#34;fake_role\u0026#34; runtime = \u0026#34;go1.x\u0026#34; timeout = 5 memory_size = 128 } resource \u0026#34;aws_lambda_function\u0026#34; \u0026#34;dog_processor_lambda\u0026#34; { function_name = \u0026#34;dogProcessor\u0026#34; filename = \u0026#34;dogProcessor.zip\u0026#34; handler = \u0026#34;main\u0026#34; role = \u0026#34;fake_role\u0026#34; runtime = \u0026#34;go1.x\u0026#34; timeout = 5 memory_size = 128 } resource \u0026#34;aws_lambda_function\u0026#34; \u0026#34;hot_dog_despatcher_lambda\u0026#34; { function_name = \u0026#34;hotDogDespatcher\u0026#34; filename = \u0026#34;hotDogDespatcher.zip\u0026#34; handler = \u0026#34;main\u0026#34; role = \u0026#34;fake_role\u0026#34; runtime = \u0026#34;go1.x\u0026#34; timeout = 5 memory_size = 128 } // LAMBDA TRIGGERS resource \u0026#34;aws_lambda_event_source_mapping\u0026#34; \u0026#34;dog_processor_trigger\u0026#34; { event_source_arn = aws_kinesis_stream.caught_dogs_stream.arn function_name = \u0026#34;dogProcessor\u0026#34; batch_size = 1 starting_position = \u0026#34;LATEST\u0026#34; enabled = true maximum_record_age_in_seconds = 604800 } resource \u0026#34;aws_lambda_event_source_mapping\u0026#34; \u0026#34;dog_processor_trigger_2\u0026#34; { event_source_arn = aws_kinesis_stream.eaten_hot_dogs_stream.arn function_name = \u0026#34;dogProcessor\u0026#34; batch_size = 1 starting_position = \u0026#34;LATEST\u0026#34; enabled = true maximum_record_age_in_seconds = 604800 } resource \u0026#34;aws_lambda_event_source_mapping\u0026#34; \u0026#34;hot_dog_despatcher_trigger\u0026#34; { event_source_arn = aws_kinesis_stream.hot_dogs_stream.arn function_name = \u0026#34;hotDogDespatcher\u0026#34; batch_size = 1 starting_position = \u0026#34;LATEST\u0026#34; enabled = true maximum_record_age_in_seconds = 604800 } ./zip-it.sh terraform init terraform plan terraform apply --auto-approve Checking with the aws CLI if everything is set up correctly To see if everything works, I invoke dogCatcher and check the dynamodb table:\naws lambda invoke --function-name dogCatcher --endpoint-url=http://localhost:4566 --payload \u0026#39;{\u0026#34;quantity\u0026#34;: 2}\u0026#39; output.txt aws dynamodb scan --endpoint-url http://localhost:4566 --table-name dogs
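The same verification can also live inside a Go integration test, which is where Localstack really pays off. Here is a small sketch; the table name, region and endpoint match the Terraform above, and the expected count is just an example tied to the invoke payload:

// A sketch of verifying the dogs table from Go against Localstack.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("ap-southeast-2"),
		Endpoint: aws.String("http://localhost:4566"), // Localstack edge port
	}))
	ddb := dynamodb.New(sess)

	out, err := ddb.Scan(&dynamodb.ScanInput{TableName: aws.String("dogs")})
	if err != nil {
		log.Fatalf("scan failed: %v", err)
	}
	// After invoking dogCatcher with quantity 2, we expect 2 items.
	fmt.Printf("dogs table holds %d items\n", aws.Int64Value(out.Count))
}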
Result I had a pretty great experience with Localstack. Even though Localstack is quite new, it seems like it can be used for learning the AWS SDKs as a developer without actually using live AWS services and getting billed for it. It can also speed up a developer\u0026rsquo;s integration tests (along with CI/CD) and debugging processes if configured properly; Localstack provides many services, and I have only configured and used 3 of them here. This also saves a lot of cost for any company.\nAlso don\u0026rsquo;t forget to check out Localstack\u0026rsquo;s slack channel, they are really helpful for any issues you run into or for further questions!\nlocalstack-community.slack ","permalink":"http://occamist.dev/posts/localstack-with-terraform-and-docker-for-running-aws-locally/","summary":"\u003ch2 id=\"the-intro\"\u003eThe Intro\u003c/h2\u003e\n\u003cp\u003e    Hello everyone, in this post I will be demonstrating how you can run localstack with Terraform and Docker and give you a proof of concept go application so you can tweak it according to your logic and follow anything you want to do such as integration/system tests for AWS services in your own CI/CD or localhost.\u003c/p\u003e\n\u003cp\u003eGithub Repository for PoC(proof of concept):\n\u003ca href=\"https://github.com/occamist/hotdog-localstack-PoC\"\u003ehotdog-PoC-repository\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eRequirements:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eDocker\u003c/li\u003e\n\u003cli\u003edocker-compose\u003c/li\u003e\n\u003cli\u003eTerraform\u003c/li\u003e\n\u003cli\u003eGo\u003c/li\u003e\n\u003cli\u003eaws CLI\u003c/li\u003e\n\u003cli\u003eA bit of lambda, dynamodb and kinesis knowledge\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eLocalstack is a testing/mocking framework for developing Cloud applications locally. Where in theory, you can stick any AWS service and emulate them in localhost without ever needing the real AWS account.\nLocalstack’s primary goal to make integration/system testing less painful for developers.\u003c/p\u003e","title":"Localstack with Terraform and Docker for running AWS locally"}]