Dev Environments for anything

The dream

The dream of modifying your local machine configuration based on whatever it is that you are doing is here. Whether you are changing the Linux kernel or just writing a small reminder document, Nix has you covered. The only thing you need to do is nix develop ENV_NAME and you are ready to go.

Flake first

I will not give a master class on what flakes are and how to use them, because I do not know yet :). What I will do is explain how I am using them to orchestrate going in and out of specific shell environments. Let me try to explain it through a specific dummy example:

{
    # Be sure to have some string that describes your flake
    description = "dummy flake";

    # This is where you define all the dependencies
    inputs = {

        # Bring in the packages from nix official source
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

        # Also bring in some special personal dependencies of your own
        dep0.url = "github:GithubUser/nix_envs?dir=dep0";
    };

    # This is where you define what the flake will provide
    outputs = { self, nixpkgs, dep0, ... }:
        let

            # Be sure to define it for your system architecture
            system = "x86_64-linux";
            pkgs = import nixpkgs { inherit system; };

        in {

            # This is where you tell it to what you want available in your
            # development shell
            devShells.${system}.default = pkgs.mkShell {
                # Include a list of packages from both nixpkgs and your own dep0
                packages = (with pkgs; [ git ])
                    ++ dep0.devShells.${system}.default.shellPkgs;

                # Execute any special commands. You can even use the dep0 ones
                shellHook = '' echo "Entering dummy environment" ''
                    + dep0.devShells.${system}.default.shellHook
                ;
            };
        };
}

Dependencies

The URL lines in the inputs section point to a source; they don’t necessarily need to come from GitHub. Everything provided by these “packages” is available in your flake. Two things to note: 1. A flake.nix file is expected to reside at the root of the pointed-to repository. 2. You can point to a directory within the repo (instead of just the root) with the “?dir=PATH” suffix.

Shell Environment

I use devShells to do two things: 1. Define a set of packages that are available when I enter the environment. 2. Define a set of actions that run when I enter the environment. Worth noting here is that you can actually use elements from the “packages” that you have in the inputs section.

Public is best

When using GitHub repositories, this has only worked for me when they are public.

Run Flake

You usually put your dummy flake.nix under a directory; let us say it is under “dummy”. Then, to run the flake, you simply do nix develop /path/to/dummy. Your packages will then be available and the shell script defined in the hook will run. Your environment is now personalized!!! And to get out of it, you simply type exit and you are back on your base system.

Register Flake

Typing nix develop /PATH/TO/dummy might be a bit too much, especially when you want to quickly go in and out of these environments. So I do two things: 1. Use an alias: alias nd="nix develop". 2. Register the flake environments locally, which requires you to copy a list of all your beautiful flakes in the form of a registry template [1] into ~/.config/nix/registry.json.
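For reference, a minimal registry.json could look like the sketch below; the flake id and path are placeholders, and the real template in [1] carries the full list of environments:

```json
{
  "version": 2,
  "flakes": [
    {
      "from": { "type": "indirect", "id": "dummy" },
      "to": { "type": "path", "path": "/path/to/dummy" }
    }
  ]
}
```

With an entry like this registered, nix develop dummy (or nd dummy) is enough.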

Self-referencing Nix Env Repo

I like to keep things in one place, so I added everything to one repository [2] and use different directories for different environments. I reference these with the “?dir=PATH” trick.

A word on Locking

A flake.lock file gets generated when I run nix develop. This has the (very enjoyable) consequence of locking the “version”; it’s more like a git reference hash. This means that no matter how much I change [2], I will always have the same version of my environment (until I regenerate my registry file).

Conclusion

In conclusion, leveraging Nix flakes to manage development environments offers a
powerful and flexible approach to configuring your environments. By structuring
dependencies, defining shell environments, and utilizing registry files, you can
seamlessly switch between customized environments with minimal effort. Whether
you’re modifying the Linux kernel or working on small projects, Nix provides a
consistent and reproducible environment, ensuring that your development process
remains efficient and reliable.

[1] https://raw.githubusercontent.com/Joelgranados/nix_envs/refs/heads/master/registry.tmpl
[2] https://github.com/Joelgranados/nix_envs

Posted in Uncategorized

A Way Of Managing Mailing Lists with LEI: Streamlining My Inbox

A new way to read Public-Inbox

Some time ago I canceled all my mailing list subscriptions and changed the way I consume the information contained in them that is not directly sent to me. This post tries to describe where I landed after that change.

Why change?

While trying to stay up to date with mailing lists, I received what seemed to me an unmanageable amount of mail that just ended up unread, gathering dust in my inbox. When I finally did read those mails, they had already become irrelevant. This “new way” of reading lists is just me trying to avoid all the crud that I don’t read anyway.

Unsubscribe, Unsubscribe, Unsubscribe!!!

It is just too much to actually read; life is too short to start a 100-e-mail thread only to realize halfway through that it is not interesting. Just let it all go and stop expecting to find the relevant bits by reading it all. This is especially relevant in high-traffic mailing lists that have a high noise-to-information ratio.

But don’t stop reading

So how do you keep up to date? I’m currently experimenting with a few LEI queries that (I believe) give me a decent subset of what I need to be reading. The following is a list of the ones that I keep coming back to, with small explanations of the arguments and search strings. I use these as ad hoc queries that I run when I need them, instead of something that comes into my inbox regularly.

Query Base

lei q \
    -o ~/Mail/[lei-staging|lists] \
    -I https://lore.kernel.org/all \
    --dedup=mid \
    -t \
    --no-save \
    [-a] \
    "(rt:1.month.ago..)"

This is the part of the query that stays more or less the same, with a few changes depending on where the query results end up:

  1. -o ~/Mail/[lei-staging|lists]: This is the directory where the query results will end up. I have set up two:

    • lei-staging : It serves to simply look at the current query. When I use this directory, I do not use the -a argument; this means that every time I execute a query, the Maildir that is created replaces the previous one. I make sure not to track this directory in my mail reader (neomutt) nor in my mail indexer (notmuch).

    • lists : This one serves to follow the interesting bits that I have found in my initial lei-staging query searches. I change the output dir to lists once I know that I’m interested and I have a very specific query that returns just one thread (this usually means matching on the subject string).

  2. -I https://lore.kernel.org/all I always search all of lore because I have found that the search patterns I usually use return a small enough result that I can assess it without too much fuss.

  3. --dedup=mid There is always a chance to have duplicates and I want LEI to de-duplicate them by looking at the mail identification (mid).

  4. -t I’m interested in threads; single e-mails usually miss the relevant context that only the full thread can provide.

  5. --no-save By default LEI saves the query in the ‘-o’ directory. I do not need this as I’m not running lei up.

  6. [-a] I only use append (-a) when the output directory is lists; for a lei-staging query I leave it out.

  7. "rt:1.month.ago.." The lei q command is designed in such a way that the individual search TERMs are ANDed together. My first search term is always a time range because I’m usually not interested in information that is very old. Here are some aspects that I consider when using the “rt” search term:

    • I run most of my lei queries weekly, so I use “rt:1.week.ago..” or “rt:2.week.ago..” if I see that I have not done it in a while. I don’t go over two weeks; my thought being that if a mail I missed 3 weeks ago has not made it into my inbox, it was not that important to begin with. Remember, all this is for contextual information.

    • I sometimes run historical searches. For these I broaden my search range to something like “rt:1.year.ago..” and add to it as I move forward. If I find that I don’t get many hits for a 1-year range, I try 5, 10, or simply remove the “rt” search term altogether.

    • Very seldom do I want to get a mail that was just sent. In those cases I use “rt:1.day.ago..”

Stalking People

lei q \
    -o ~/Mail/lei-staging \
    -I https://lore.kernel.org/all \
    --dedup=mid \
    -t \
    --no-save \
    "(rt:1.week.ago..)" \
    "f:torvalds"

There are several reasons why, in my case, I focus on mails sent by a certain person:

  1. Certain people publish relevant (for what I’m doing) information to the lists. For example: I try to read mails from Linus when it gets close to the merge window to figure out when it is happening and if there are any special considerations.

  2. When working on a patch set with other developers, I find that looking at their discussions adds to my situational awareness.

Files that are interesting

lei q \
    -o ~/Mail/lei-staging \
    -I https://lore.kernel.org/all \
    --dedup=mid \
    -t \
    --no-save \
    "(rt:1.month.ago..)" \
    "dfn:proc_sysctl.c"

There are two reasons I search for discussions/patchsets based on files:

  1. Knowing whether people are working on the same file that I’m touching, and what they are doing, helps me avoid creating merge conflicts. It also helps in knowing when to coordinate with other developers.

  2. I want to know if I have missed anything relevant to what I maintain. These are the cases where someone sent something to the list, but forgot to send it to me.

Keep what is interesting updated

I put the mails that I find relevant/interesting in a “lists” directory. However, since I am not subscribed to the list, I do not get any subsequent mails in the thread. So how do I get the mails that are sent after I have made my query?

I have a script that does that for me. It uses notmuch to find all the thread subjects inside my “lists” directory and then runs a lei query that updates each one. The script runs every time I sync my mail. And on the off chance that I don’t want to read any more updates from a thread, I just remove it from the “lists” directory.

#!/usr/bin/env python3
import subprocess
import os.path
import textwrap

mail_path = os.path.expanduser("~/Mail/fastmail")
sync_dir_name = "lists"
notmuch_cmd = "notmuch search --format=json folder:{} | jq -r '.[].subject'".format(sync_dir_name)

subjects = subprocess.run(notmuch_cmd, shell=True, capture_output=True).stdout.splitlines()

lei_output = os.path.join(mail_path, sync_dir_name)
lei_I = "https://lore.kernel.org/all"
lei_time_range = "\"(rt:1.year.ago..)\""
lei_base_cmd = "lei q -v -o {} -I {} --dedup=mid -t --no-save -a {}" \
  .format(lei_output, lei_I, lei_time_range)

print("Detected {} thread(s) in {}".format(len(subjects), lei_output))
for subject in subjects:
  s = subject.decode('utf-8')
  # Update each followed thread with a subject ("s:") query
  lei_cmd = '{} "s:{}"'.format(lei_base_cmd, s.replace('"', '\\"'))
  print("Updating thread: {}".format(s))
  lei_cmd_output = subprocess.run(lei_cmd, shell=True,
                                  capture_output=True).stdout.decode('utf-8')
  if len(lei_cmd_output) > 1:
    print(textwrap.indent(lei_cmd_output, '    '))

new_cmd = ["notmuch", "new"]
print ("Executing {}".format(new_cmd))
new_cmd_out = str(subprocess.run(new_cmd, capture_output=True).stdout)
print ("{}".format(new_cmd_out))

Conclusion

Simplifying your inbox does not mean losing touch. LEI helps you stay informed without overwhelming your inbox. Try these queries, and share your own tips in the comments!

Posted in Productivity

Mutt. Delete Duplicate Mails.

I just have to say this: Mutt is awesome!!!

It’s just as easy as:

  1. Press “T” to activate pattern tagging
  2. Input “~=<enter>” to tag all duplicated mails
  3. Press “;” to apply an action to the tagged mails
  4. Input “d” to delete them.

And that is it!
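If you do this often, the four steps can be collapsed into a single muttrc macro; the choice of key (D) is mine, adjust to taste:

```
# Tag all duplicates (~=) and delete the tagged messages
macro index D "T~=<enter>;d" "delete duplicate mails"
```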

Posted in Uncategorized

Bootstrapping Debian package

Debian packages depend on a “debian” directory. While you could create it by hand, there is a faster way: you can use dh_make. Here is what I used to get started:

create_debian_directory()
{
  pushd $1
  dh_make -y -s -e ${MAINTAINER_EMAIL} -f ../package-0.0.21.orig.tar.gz

  # Note: the recipe lines in debian/rules must be indented with real tabs
  cat > debian/rules <<EOF
#!/usr/bin/make -f
%:
	dh \$@

override_dh_auto_configure:
	./configure

override_dh_usrlocal:
	echo "Skipping dh_usrlocal"

override_dh_shlibdeps:
	echo "Skipping dh_shlibdeps"
EOF

  sed -e "s/^Build-Depends: /Build-Depends: ${BUILD_DEPENDS}, /" -i debian/control
  popd
}

Things to watch out for:

  • You have to have the tar.gz file named in a certain way (the package-version.orig.tar.gz pattern that dh_make expects).
  • The debian/control and debian/rules files are what control everything.
  • A good resource is Here

Posted in Uncategorized

icecc must not be in “native” mode

RocksDB was taking a bit too long to compile, so I decided to try out icecc. When I executed it, I noticed on the icecream-sundae monitor that my “second” machine was not being used. After some lengthy debugging I realized that RocksDB compiles with -march=native, which in turn forces icecc to compile locally only. The solution to this was simple enough:

PORTABLE=1 make -j${BIG_NUMBER}

Hope this helps whoever is trying to use icecc as a cluster for building.

Posted in Uncategorized

Add included files to your ctags file

To add the headers to the ctags file I found this little gem. And I ended up using it like this:

#!/bin/bash
if [ ! -z "$1" ]; then
  root_path="$1"
else
  root_path="."
fi

gcc -M ${root_path}/* 2> /dev/null | \
  sed -e 's/^ //' -e 's/ \\$//g' | \
  sed -e '/^$/d' -e '/\.o:[ \t]*$/d' | \
  sed -e 's/^.*\.o: //' -e 's/ /\n/g' | \
  cat - <(ls -d ${root_path}/*) | \
  ctags-universal -R -L - --c-kinds=+p --fields=+iaS --extras=+q


# -e 's/^ //'         -> Remove the leading space
# -e 's/ \\$//g'      -> Remove the trailing '\'
# -e '/^$/d'          -> Remove empty lines
# -e '/\.o:[ \t]*$/d' -> Remove the lines that only contain object file paths
# -e 's/^.*\.o: //'   -> Remove the object path from the start of the line
# -e 's/ /\n/g'       -> Put each source file on its own line
# cat - <(ls -d ${root_path}/*) -> Append the root_path files

I’m still missing the implementation files for the headers though. For another day…

Posted in Uncategorized

Deoplete works well for latest vim

Neocomplete does not work with my current version of Vim, so I had to install an alternative. Deoplete is written by Shougo and does the simple autocompletion that I expected to start with. Easy to install with Bundle: https://github.com/Shougo/deoplete.nvim

Posted in Uncategorized

GPG links/tutorials/FAQ

I will not write yet another tutorial about how to set up GPG. Instead, I will link to the ones that I used to do several things.

Setting up GPG

https://oguya.ch/posts/2016-04-01-gpg-subkeys/

https://blog.tinned-software.net/create-gnupg-key-with-sub-keys-to-sign-encrypt-authenticate/

Setting up subkeys

https://oguya.ch/posts/2016-04-01-gpg-subkeys/

https://wiki.debian.org/Subkeys

Setting up GPG for SSH authentication

https://gregrs-uk.github.io/2018-08-06/gpg-key-ssh-mac-debian/

https://ryanlue.com/posts/2017-06-29-gpg-for-ssh-auth

https://opensource.com/article/19/4/gpg-subkeys-ssh

FAQ

https://security.stackexchange.com/questions/206072/how-do-i-interpret-output-that-produces-gpg-listing-keys

https://superuser.com/questions/1371088/what-do-ssb-and-sec-mean-in-gpgs-output

Posted in Uncategorized

GPG does not output the key IDs by default

I wanted to start signing my mails and my git commits with my newly minted subkey. But when I did a

gpg2 --list-keys

I could not see the key IDs to add to my git config nor to my muttrc. After some searching around I found the nifty gpg argument that gave me what I wanted:

gpg2 --keyid-format LONG --list-keys

LONG stands for the 16-character key ID; alternatively, you can use SHORT for the 8-character one.
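With the long ID in hand, it can be dropped into your git configuration; the key ID below is a made-up placeholder:

```
# ~/.gitconfig -- 0x1234567890ABCDEF is a placeholder key ID
[user]
    signingkey = 0x1234567890ABCDEF
[commit]
    gpgsign = true
```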

Posted in Uncategorized

Always remember the clockwise/spiral rule

From time to time I need to go back and see how to “read” the const pointer order when declaring in C++. Here is a good reminder/link for it.

And try to always append a const in order for it to be less confusing.

Posted in Uncategorized