NetBSD Planet

January 21, 2020 New Security Advisory: NetBSD-SA2020-001

January 19, 2020

Kimmo Suominen Patch for HTTP Request Smuggling

I added a patch to www/nginx from an upstream commit to address CVE-2019-20372. Version 1.16.1nb2 includes the patch.

January 18, 2020

DragonFly BSD Digest In Other BSDs for 2019/01/18

Unofficial theme: conventions.  There are lots of options this year; you should go.  If you are reading this, you’re the right demographic to enjoy one.

January 15, 2020

Roy Marples Anonymity Profiles for DHCP Clients aka RFC 7844

DHCP clients by default send a fair chunk of data which can identify you to the local DHCP server. In return, you are provided with a stable IP address and configuration parameters.

At a bare minimum, the hardware address of the interface is sent - this is required for DHCP to work.

So, how to solve this dilemma of wanting total anonymity? The answer is to randomise the hardware address. This happens when the carrier is down OR dhcpcd starts with the interface down. dhcpcd then uses this random hardware address to derive a DUID-LL, which is used in place of any saved DUID, and sets the IAID to all zeros. This combination is used by DHCP and DHCPv6 to identify a lease. As it is randomised each time the carrier comes up, you get a different IP address!

Try not to use this on an unstable link as it could drain the DHCP server of addresses :(

But we can't stop there! dhcpcd also sends some identifying options. For example, this is sent in the vendor class identifier:

It does not identify you or the device in any way, but it does say what software is being used on which hardware. This could be used by DHCP servers to hand out a specific image to download and boot from TFTP for network-boot clients.

Now, there are a gazillion and one DHCP options out there - we don't know what you've configured. So dhcpcd will mask all of them when anonymous mode is activated, unless they are essential for dhcpcd to work correctly on the network. But wait! What if you really want to leak something? Say you're on a corporate network that uses DHCP security and still want to remain anonymous? Well, you can! Any request or option placed after the anonymous option in dhcpcd.conf is turned back on. So the placement of the anonymous directive is important, unlike other dhcpcd options. So far this is the only implementation of RFC 7844 which does this :)
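To illustrate the ordering rule, here is a hypothetical dhcpcd.conf fragment (the option name is just an example, not a recommendation):

```
# Options requested above this line are masked once anonymous
# mode is active.
anonymous

# Deliberately still requested, because it follows the anonymous
# directive:
option domain_name_servers
```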

This is NOT enabled by default because most people want stable addresses, AND a flappy link could drain addresses as discussed earlier.

January 14, 2020

DragonFly BSD Digest Learning 2 things

Sometimes you get 2 nice tips: I like seeing this NetBSD->FreeBSD->DragonFly cross pollination in this commit, and also now I know I can fsck a FAT volume on BSD.

3rd bonus: that last sentence sounds terribly rude.

January 13, 2020

NetBSD Blog Improving the ptrace(2) API and preparing for LLVM-10.0
This month I have improved the NetBSD ptrace(2) API, replacing one legacy interface that had a few flaws with two new calls offering new features, and removing technical debt.

As LLVM 10.0 branches soon (Jan 15th 2020), I worked on proper support of the LLVM features for NetBSD 9.0 (today RC1) and NetBSD HEAD (the future 10.0).

ptrace(2) API changes

There are around 20 Machine Independent ptrace(2) calls. The origin of some of these calls traces back to 4.3BSD. The PT_LWPINFO call was introduced in 2003 and was loosely inspired by a similar interface in HP-UX ttrace(2). As that was early in the history of POSIX threads and SMP support, not every bit of the interface remained ideal for current computing needs.

The PT_LWPINFO call was originally intended to retrieve the thread (LWP) information inside a traced process.

This call was designed to work as an iterator over threads, retrieving the LWP id plus event information. The event information is received in a raw format (PL_EVENT_NONE, PL_EVENT_SIGNAL, PL_EVENT_SUSPENDED).

The flaws of this interface are as follows:


1. PT_LWPINFO shares its operation name with PT_LWPINFO from FreeBSD, which works differently and is used for different purposes.

2. pl_event can only return whether a signal was emitted to all threads or a single one. There is no information whether this is a per-LWP signal or per-PROC signal, no siginfo_t information is attached etc.

3. Syncing our behavior with FreeBSD would mean complete breakage for our PT_LWPINFO users, and it is actually unnecessary, as we receive the full siginfo_t through the Linux-like PT_GET_SIGINFO instead of reimplementing siginfo_t inside ptrace_lwpinfo FreeBSD-style. (FreeBSD wanted to follow NetBSD and adopt some of our APIs in ptrace(2) and signals.)

4. Our PT_LWPINFO is unable to list LWP ids in a traced process.

5. The PT_LWPINFO semantics cannot be used in core files as-is (as our PT_LWPINFO returns the next LWP, not the indicated one), and pl_event is redundant with netbsd_elfcore_procinfo.cpi_siglwp, yet still less powerful (as it cannot distinguish between a per-LWP and a per-PROC signal in a single-threaded application).

6. PT_LWPINFO is already documented in the BUGS section of ptrace(2), as it contains additional flaws.


To address these flaws, the plan was as follows:

1. Remove PT_LWPINFO from the public ptrace(2) API, keeping it only as a hidden namespaced symbol for legacy compatibility.

2. Introduce PT_LWPSTATUS, which queries the kernel about an exact thread and retrieves useful information about that LWP.

3. Introduce PT_LWPNEXT with the iteration semantics of PT_LWPINFO, namely returning the next LWP.

4. Include per-LWP information in core(5) files as "[email protected]".

5. Fix flattening the signal context in netbsd_elfcore_procinfo in core(5) files, and move per-LWP signal information to the per-LWP structure "[email protected]".

6. Do not bother with FreeBSD like PT_GETNUMLWPS + PT_GETLWPLIST calls, as this is a micro-optimization. We intend to retrieve the list of threads once on attach/exec and later trace them through the LWP events (PTRACE_LWP_CREATE, PTRACE_LWP_EXIT). It's more important to keep compatibility with current usage of PT_LWPINFO.

7. Keep the existing ATF tests for PT_LWPINFO to avoid rot.
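Based on the semantics described above, iterating a tracee's threads with the new call might look like the following sketch. This is a NetBSD-HEAD-only illustration written from the description in this post, not code from the actual change, so it cannot be compiled or run elsewhere.

```c
#include <sys/types.h>
#include <sys/ptrace.h>
#include <stdio.h>

/*
 * Sketch: list all LWPs of an attached tracee with PT_LWPNEXT.
 * Starting from pl_lwpid = 0, each call returns the next LWP;
 * pl_lwpid coming back as 0 means the iteration is complete.
 */
static void
list_lwps(pid_t pid)
{
	struct ptrace_lwpstatus pls;

	pls.pl_lwpid = 0;
	while (ptrace(PT_LWPNEXT, pid, &pls, sizeof(pls)) != -1 &&
	    pls.pl_lwpid != 0)
		printf("LWP %d name='%s'\n", pls.pl_lwpid, pls.pl_name);
}
```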

PT_LWPSTATUS and PT_LWPNEXT operate on the newly introduced "struct ptrace_lwpstatus". This structure is inspired by SmartOS lwpstatus_t, struct ptrace_lwpinfo from NetBSD, and struct ptrace_lwpinfo from FreeBSD, and their usage in real existing open-source software.

#define PL_LNAMELEN 20 /* extra 4 for alignment */

struct ptrace_lwpstatus {
	lwpid_t		pl_lwpid;		/* LWP described */
	sigset_t	pl_sigpend;		/* LWP signals pending */
	sigset_t	pl_sigmask;		/* LWP signal mask */
	char		pl_name[PL_LNAMELEN];	/* LWP name, may be empty */
	void		*pl_private;		/* LWP private data */
	/* Add fields at the end */
};

I have decided to avoid a writable version of PT_LWPSTATUS that would rewrite signals, the name, or the private pointer. These operations are practically unused in existing open-source software. There are two exceptions that I am familiar with, but both are specific to kludges overusing ptrace(2). If these operations are ever needed, they can be implemented without a writable version of PT_LWPSTATUS, by patching the tracee's code.

I have switched GDB (in base), LLDB, picotrace and the sanitizers to the new API. As NetBSD 9.0 is nearing release, this API change will land in NetBSD 10.0, and existing ptrace(2) software will use PT_LWPINFO for now.

The new interfaces are ensured to be stable and are continuously verified by the ATF infrastructure.


Early in the history of libpthread, the NetBSD developers designed and implemented a libpthread_dbg library. Its use case was initially to handle user-space scheduling of threads in the M:N threading model inspired by Solaris.

After the switch of the internals to the new SMP design (1:1 model) by Andrew Doran, this library lost its purpose and was no longer used (except for being linked for some time into a local base-system GDB version). I removed libpthread_dbg when I modernized the ptrace(2) API, as it no longer had any use (and it had been broken in several ways for years without anyone noticing).

As I have introduced the PT_LWPSTATUS call, I have decided to verify this interface in a fancy way. I have mapped ptrace_lwpstatus::pl_private onto the tls_tcb structure as it is defined in the sys/tls.h header:

struct tls_tcb {
#ifdef __HAVE_TLS_VARIANT_I
	void	**tcb_dtv;
	void	*tcb_pthread;
#else	/* TLS variant II */
	void	*tcb_self;
	void	**tcb_dtv;
	void	*tcb_pthread;
#endif
};
The pl_private pointer is in fact a pointer into the debuggee's address space, pointing to a tls_tcb structure. This is not universally true in every environment, but it is true in regular programs using the ELF loader and the libpthread library. Now, with the tcb_pthread field we can reference a regular C-style pthread_t object. Wrapping this into a real tracer, I have implemented a program that can either start a debuggee or attach to a process and, on demand (as a SIGINFO handler, usually triggered in the BSD environment with ctrl-t), dump the full state of the pthread_t objects within a process. Part of an example usage is below:

$ ./pthreadtracer -p `pgrep nslookup` 
[ 21088.9252645] load: 2.83  cmd: pthreadtracer 6404 [wait parked] 0.00u 0.00s 0% 1600k
DTV=0x7f7ff7ee70c8 TCB_PTHREAD=0x7f7ff7e94000
LID=4 NAME='sock-0' TLS_TSD=0x7f7ff7eed890
pt_self = 0x7f7ff7e94000
pt_tls = 0x7f7ff7eed890
pt_magic = 0x11110001 (= PT_MAGIC=0x11110001)
pt_state = 1
pt_lock = 0x0
pt_flags = 0
pt_cancel = 0
pt_errno = 35
pt_stack = {.ss_sp = 0x7f7fef9e0000, ss_size = 4194304, ss_flags = 0}
pt_stack_allocated = YES
pt_guardsize = 65536

The full log is stored here. The source code of this program, built on top of picotrace, is here.
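The pointer chase at the heart of pthreadtracer can be sketched roughly as follows. This is a simplified NetBSD-only illustration of the idea, not the actual picotrace-based code, and the helper name is made up for this sketch.

```c
#include <sys/types.h>
#include <sys/ptrace.h>
#include <string.h>

/*
 * Sketch: read a pointer-sized value from the tracee's address
 * space with PT_READ_D, one int-sized word at a time.  Following
 * pl_private and then tcb_pthread this way yields the tracee's
 * pthread_t, whose fields can then be read the same way.
 */
static void *
read_tracee_ptr(pid_t pid, void *addr)
{
	void *val = NULL;

	for (size_t off = 0; off < sizeof(val); off += sizeof(int)) {
		int word = ptrace(PT_READ_D, pid,
		    (void *)((char *)addr + off), 0);
		memcpy((char *)&val + off, &word, sizeof(word));
	}
	return val;
}
```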

The problem with this utility is that it requires the libpthread sources to be available and reachable by the build rules: pthreadtracer reaches into each field of pthread_t knowing its exact internal structure. This is enough for validating PT_LWPSTATUS, but is it enough for shipping it to users and finding a real-world use-case? Debuggers (GDB, LLDB) using debug information can reach the same data with DWARF, but supporting DWARF in pthreadtracer is currently harder than it ought to be for interface tests. There is also the option of reviving libpthread_dbg(3) at some point, revamping it for the modern libpthread(3); this would avoid DWARF introspection and could find some use in self-introspecting programs - but are there any?


I keep searching for a solution to properly support lld (LLVM linker).

NetBSD's major issue with LLVM lld is its lack of standalone-linker support, and therefore its inability to be a drop-in GNU ld replacement. I was forced to publish a standalone wrapper for lld, called lld-standalone, and host it on GitHub for the time being, at least until we sort out the discussions with the LLVM developers.

LLVM sanitizers

As the NetBSD code is evolving, there is a need to support multiple kernel versions starting from 9.0 with the LLVM sanitizers. I have introduced the following changes:

The purpose of these changes is as follows:

There is still one portability issue in the sanitizers: we hard-code the offset of the link_map field within the internal dlopen handle. The dlopen handle is internal to the ELF loader and is an object of type Obj_Entry. This type is not available to third-party code and is not stable; it also has a different layout depending on the CPU architecture. The same problem exists on at least FreeBSD and, to some extent, Linux. I have prepared a patch that utilizes the dlinfo(3) call with the RTLD_DI_LINKMAP option. Unfortunately, there is a regression with MSan on NetBSD HEAD (it works on 9.0rc1) that makes it harder for me to finalize the patch. I suspect that after the switch to GCC 8 there is now incompatible behavior that causes a recursive call sequence: _Unwind_Backtrace() calls _Unwind_Find_FDE(), which calls search_object, triggering the __interceptor_malloc interceptor again, which calls _Unwind_Backtrace(), resulting in a deadlock. The offending code is located in src/external/gpl3/gcc/dist/libgcc/unwind-dw2-fde.c and needs proper investigation. A quick workaround to stop recursive stack unwinding unfortunately did not work, as there is another (related?) problem:

==4629==MemorySanitizer CHECK failed:
/public/llvm-project/llvm/projects/compiler-rt/lib/msan/msan_origin.h:104 "((stack_id)) != (0)" (0x0, 0x0)

This shows that this low-level code is very sensitive to slight changes, and needs maintenance power. We keep improving the coverage of tested scenarios on the LLVM buildbot, and we enabled sanitizer tests on 9.0 NetBSD/amd64; however we could make use of more manpower in order to reach full Linux parity in the toolchain.

Other changes

As my project in LLVM and ptrace(2) is slowly concluding, I'm trying to finalize the related tasks that were left behind.

I've finished researching why we couldn't use syscall restart on kevent(2) call in LLDB and improved the system documentation on it. I have also fixed small nits in the NetBSD wiki page on kevent(2).

I have updated the list of ELF defines for CPUs and OS ABIs in sys/exec_elf.h.

Plan for the next milestone

Port remaining ptrace(2) test scenarios from Linux, FreeBSD and OpenBSD to ATF and ensure that they are properly operational.

This work was sponsored by The NetBSD Foundation.

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

NetBSD Blog Working towards LLDB on i386

Upstream describes LLDB as a next generation, high-performance debugger. It is built on top of LLVM/Clang toolchain, and features great integration with it. At the moment, it primarily supports debugging C, C++ and ObjC code, and there is interest in extending it to more languages.

In February 2019, I started working on LLDB, as contracted by the NetBSD Foundation. So far I've been working on reenabling continuous integration, squashing bugs, improving NetBSD core file support, extending NetBSD's ptrace interface to cover more register types, fixing compat32 issues, and fixing watchpoint and threading support.

Throughout December I've continued working on our build bot maintenance, in particular enabling compiler-rt tests. I've revived and finished my old patch for extended register state (XState) in core dumps. I've started working on bringing proper i386 support to LLDB.

Generic LLVM updates

Enabling and fixing more test suites

In my last report, I've indicated that I've started fixing test suite regressions and enabling additional test suites belonging to compiler-rt. So far I've been able to enable the following suites:

In case someone's wondering how the different memory-related sanitizers differ, here's a short answer: ASAN covers major errors that can be detected with an approximate 2x slowdown (out-of-bounds accesses, use-after-free, double-free...), LSAN focuses on memory leaks and has almost no overhead, while MSAN detects uninitialized reads with a 3x slowdown.

The following test suites were skipped because of major known breakage, pending investigation:

The changes done to improve test suite status are:

Repeating the rationale for disabling ASLR/MPROTECT from my previous report: the sanitizers or tools in question do not work with the listed hardening features by design, and we explicitly make them fail. We are using paxctl to disable the relevant feature per-executable, and this makes it possible to run the relevant tests on systems where ASLR and MPROTECT are enabled globally.

This also included two builtin tests. In the case of clear_cache_test.c, this is a problem with the test itself, and I have already submitted better MPROTECT support for clear_cache_test. In the case of enable_execute_stack_test.c, it's a problem with the API itself, and I don't think it can be fixed without replacing it with something more suitable for NetBSD. However, it does not seem to be actually used by programs created by clang, so I do not think it's worth looking into at the moment.

Demise and return of LLD

In my last report, I mentioned that we had switched to using LLD as the linker for the second-stage builds. Sadly, this was only to discover that some of the new test failures were caused by exactly that.

As I've reported back in January 2019, NetBSD's dynamic loader does not support executables with more than two segments. The problem has not been fixed yet, and we were so far relying on explicitly disabling the additional read-only segment in LLD. However, upstream started splitting the RW segment on GNU RELRO, effectively restoring three segments (or up to four, without our previous hack).

This initially forced me to disable LLD and return to GNU ld. However, upstream recently suggested using -znorelro, and we were able to go back down to two segments and reenable it.

libc++ system feature list update

Kamil has noticed that our feature list for libc++ is outdated. We have missed indicating that NetBSD supports aligned_alloc(), timespec_get() and C11 features. I have updated the feature list.

Current build bot status

The stage 1 build currently fails as upstream has broken libc++ builds with GCC. Hopefully, this will be fixed after the weekend.

Before that, we had a bunch of failing tests: 7 related to profiling and 4 related to XRay. Plus, the flaky LLDB tests mentioned earlier.

Core dump XState support finally in

I was working on including full register set ('XState') in core dumps before my summer vacation. I've implemented the requested changes and finally pushed them. The patch set included four patches:

  1. Include the XSTATE note in x86 core dumps, with preliminary support for machine-dependent core dump notes.

  2. Fix alignment when reading core notes, fixing a bug in my tests for core dumps that were added earlier.

  3. Combine the x86 register tests into a unified test function, simplifying the test suite a lot (by almost half of the original line count).

  4. Add tests for reading registers from x86 core dumps, covering both old and new notes.

NetBSD/i386 support for LLDB

As the next step in my LLDB work, I've started working on providing i386 support. This covers both native i386 systems, and 32-bit executable support on amd64. In total, the combined amd64/i386 support covers four scenarios:

  1. 64-bit kernel, 64-bit debugger, 64-bit executable (native 64-bit).

  2. 64-bit kernel, 64-bit debugger, 32-bit executable.

  3. 64-bit kernel, 32-bit debugger, 32-bit executable.

  4. 32-bit kernel, 32-bit debugger, 32-bit executable (native 32-bit).

Those cases really differ only from the kernel's point of view. In scenarios 1 and 2 the debugger uses the 64-bit ptrace API, while in cases 3 and 4 it uses the 32-bit ptrace API. In case 2, the application runs via compat32 and the kernel fits its data into the 64-bit ptrace API. In case 3, the debugger runs via compat32.

Technically, cases 1 and 2 are already covered by the amd64 code in LLDB. However, for the user's convenience LLDB needs to be extended to recognize 32-bit processes on NetBSD and adjust the data obtained from ptrace for 32-bit executables. Cases 3 and 4 need to be covered by making the code build on i386.

Other LLDB plugins implement this by creating separate i386 and amd64 modules, then including a 32-bit branch in amd64 that reuses parts of the i386 code. I am following suit. My plan is to implement 32-bit process support for case 2 first, then port everything to i386.

So far I have implemented the code to recognize 32-bit processes and have started implementing the i386 register interface that is meant to map data from 64-bit ptrace register dumps. However, it does not seem to map registers correctly at the moment, and I am still debugging the problem.

Future plans

As mentioned above, I am currently working on providing support for debugging 32-bit executables on amd64. Afterwards, I am going to work on porting LLDB to run on i386.

I am also tentatively addressing compiler-rt test suite problems in order to reduce the number of build bot failures. I also need to look into remaining kernel problems regarding simultaneous delivery of signals and breakpoints or watchpoints.

Furthermore, I am planning to continue with the items from the original LLDB plan. Those are:

  1. Add support to backtrace through signal trampoline and extend the support to libexecinfo, unwind implementations (LLVM, nongnu). Examine adding CFI support to interfaces that need it to provide more stable backtraces (both kernel and userland).

  2. Add support for aarch64 target.

  3. Stabilize LLDB and address breaking tests from the test suite.

  4. Merge LLDB with the base system (under LLVM-style distribution).

This work is sponsored by The NetBSD Foundation

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

January 12, 2020

NetBSD Blog GSoC 2019 Final Report: Incorporating the memory-hard Argon2 hashing scheme into NetBSD


We successfully incorporated the Argon2 reference implementation into NetBSD/amd64 for our 2019 Google Summer of Code project. We introduced our project here and provided some hints on how to select parameters here. For our final report, we provide an overview of the changes made to complete the project.

Incorporating the Argon2 Reference Implementation

The Argon2 reference implementation, available here, is dual-licensed under the Creative Commons CC0 1.0 license and the Apache License 2.0. To import the reference implementation into src/external, we chose the Apache 2.0 license for this project.

During our initial phase 1, we focused on building the libargon2 library and integrating the functionality into the existing password-management framework via libcrypt. Toward this end, we imported the reference implementation and created the "glue" to incorporate the changes into /usr/src/external/apache2. The reference implementation is found in

m2$ ls /usr/src/external/apache2/argon2                                                                                    
Makefile dist     lib      usr.bin
The Argon2 reference implementation provides both a library and a binary. We build the libargon2 library to support libcrypt integration, and the argon2(1) binary to provide a userland command-line tool for evaluation. To build the code, we add MKARGON2 to
_MKVARS.yes= \
        MKARGON2 \
and add the following conditional build to /usr/src/external/apache2/Makefile
.if (defined(MKARGON2) && ${MKARGON2} != "no")
SUBDIR+= argon2
After successful build and installation, we have the following new files and symlinks
To incorporate Argon2 into the password management framework of NetBSD, we focused on libcrypt. In /usr/src/lib/libcrypt/Makefile, we first check for MKARGON2
.if (defined(MKARGON2) && ${MKARGON2} != "no")
If HAVE_ARGON2 is defined and enabled, we append the following to the build flags
.if defined(HAVE_ARGON2)
SRCS+=          crypt-argon2.c
CFLAGS+=        -DHAVE_ARGON2 -I../../external/apache2/argon2/dist/phc-winner
LDADD+=         -largon2 
As hinted above, our most significant addition to libcrypt is the file crypt-argon2.c. This file pulls the functionality of libargon2 into libcrypt. Changes were also made to pw_gensalt.c to allow for parameter parsing and salt generation.

Having completed the backend support, we pull Argon2 into userland tools, such as pwhash(1), in the same way as above

.if ( defined(MKARGON2) && ${MKARGON2} != "no" )
Once built, we can specify Argon2 using the '-A' command-line argument to pwhash(1), followed by the Argon2 variant name, and any of the parameterized values specified in argon2(1). See our first blog post for more details. As an example, to generate an argon2id encoding of the password password using default parameters, we can use the following
m2# pwhash -A argon2id password
To simplify Argon2 password management, we can utilize passwd.conf(5) to apply Argon2 to a specified user or all users. The same parameters are accepted as for argon2(1). For example, to specify argon2i with non-default parameters for user 'testuser', you can use the following in your passwd.conf
m1# grep -A1 testuser /etc/passwd.conf 
        localcipher = argon2i,t=6,m=4096,p=1
With the above configuration in place, we are able to support standard password management. For example
m1# passwd testuser
Changing password for testuser.
New Password:
Retype New Password:

m1# grep testuser /etc/master.passwd  


The argon2(1) binary allows us to easily validate parameters and encodings. This is most useful during performance testing; see here. With argon2(1), we can specify our parameterized values and evaluate both the resulting encoding and the timing.
m2# echo -n password|argon2 somesalt -id -p 3 -m 8
Type:           Argon2id
Iterations:     3
Memory:         256 KiB
Parallelism:    3
Hash:           97f773f68715d27272490d3d2e74a2a9b06a5bca759b71eab7c02be8a453bfb9
Encoded:        $argon2id$v=19$m=256,t=3,p=3$c29tZXNhbHQ$l/dz9ocV0nJySQ09LnSiqb
0.000 seconds
Verification ok
We provide one approach to evaluating Argon2 parameter tuning in our second post. In addition to manual testing, we also provide some ATF tests for pwhash, for both hashing and verification. These tests focus on encoding correctness, matching known encodings against the results produced during execution.

tp: t_argon2_v10_hash
tp: t_argon2_v10_verify
tp: t_argon2_v13_hash
tp: t_argon2_v13_verify

cd /usr/src/tests/usr.bin/argon2

info: atf.version, Automated Testing Framework 0.20 (atf-0.20)
info: tests.root, /usr/src/tests/usr.bin/argon2


tc-so:Executing command [ /bin/sh -c echo -n password | \
argon2 somesalt -v 13 -t 2 -m 8 -p 1 -r ]
tc-end: 1567497383.571791, argon2_v13_t2_m8_p1, passed



We have successfully integrated Argon2 into NetBSD using the native build framework. We have extended existing functionality to support local password management using Argon2 encoding. We are able to tune Argon2 so that we can achieve reasonable performance on NetBSD. In this final post, we summarize the work done to incorporate the reference implementation into NetBSD and how to use it. We hope you can use the work completed during this project. Thank you for the opportunity to participate in the Google Summer of Code 2019 and the NetBSD project!

January 11, 2020

DragonFly BSD Digest In Other BSDs for 2020/01/11

No theme evolved, but lots more links this week.



Kimmo Suominen Cherry-picked patches for ncurses

In order to reduce the number of vulnerabilities on my systems, I added some patches to devel/ncurses (and devel/ncursesw) to address CVE-2018-19211, CVE-2019-17594, and CVE-2019-17595. Version 6.1nb7 includes the patches.

January 09, 2020

DragonFly BSD Digest cpdup and microseconds

cpdup(1), a DragonFly copying tool that really should be used more, now uses microseconds for comparison.  This is probably related to the sysctl vfs.timestamp_precision also now using microseconds.

This probably won’t affect your usage of cpdup unless you are copying some very actively modified files, but I like to mention it in case someone feels like porting it to OpenBSD/NetBSD – it’s already in FreeBSD, though I assume it’s a slightly older version.

Roy Marples structure padding in C

Whilst developing Privilege Separation in dhcpcd, I had to come up with an IPC design for it. Of course, that involves creating structures.

So far, my structures in dhcpcd are long-lived - or rather, the scope is designed to outlive where it was created. As such they are created on the heap and are at the mercy of malloc. Generally I use calloc so that the whole area is initialised to zero, as uninitialised memory is bad.

So I decided to see if I could just create the structures I need on the stack. Turns out I could! Yay! Now, how do you initialise a structure on the stack to all zeros? First let us consider this structure:

struct ps_addr {
    sa_family_t psa_family;
    union {
        struct in_addr psau_in_addr;
        struct in6_addr psau_in6_addr;
    } psa_u;
#define psa_in_addr     psa_u.psau_in_addr
#define psa_in6_addr    psa_u.psau_in6_addr
};

The first way is memset:

struct ps_addr psa;

memset(&psa, 0, sizeof(psa));
psa.psa_family = AF_INET;

But what if you could avoid memset? Luckily, the C standard says that when you initialise any member, all other members are zeroed. So we can do this:

struct ps_addr psa = { .psa_family = AF_INET };

Wow!!! So simple. This reduces the binary size a fair bit. But then I turned on the Memory Sanitiser and boom, it crashed hard. Why?

The answer is simple - padding. Eric S. Raymond gives a very good write-up about the problem. Basically, the standard will initialise any uninitialised members to zero - but padding added for alignment isn't a member! So we need to ensure that our structure requires zero padding.

Here is the new struct:

struct ps_addr {
    sa_family_t psa_family;
    uint8_t psa_pad[4 - sizeof(sa_family_t)];
    union {
        struct in_addr psau_in_addr;
        struct in6_addr psau_in6_addr;
    } psa_u;
#define psa_in_addr     psa_u.psau_in_addr
#define psa_in6_addr    psa_u.psau_in6_addr
};

And it allows the former structure initialisation to work, and the memory sanitisers are happy - so happy days :) Now, if anyone can tell me what I can use instead of the magic number 4 in the above, I'd be even happier!

January 07, 2020

Kimmo Suominen Restored correct psify source

I have restored fetching of correct upstream files for print/psify. Looks like we pointed to the wrong files for 13 years.

Kimmo Suominen Updated rsnapshot

I updated sysutils/rsnapshot to 1.4.3 in another long-overdue commit (notable changes).

Kimmo Suominen Updated tcptraceroute6

I updated net/tcptraceroute6 to 1.0.4 to incorporate portability fixes from upstream.

January 06, 2020

Unix Stack Exchange Is there anything similar to CrystalDiskMark for UNIX?

If you're shopping for an SSD, you've surely seen one of those screenshots from CrystalDiskMark with a few green squares, and the 2x4 matrix with results of doing read/write tests for the given hardware:

However, I've never seen anything similar for UNIX — at most what you get is dd for sequential read/write tests, which gives no indication of the IOPS parameters of the hardware in question.

Is there anything similar to CrystalDiskMark for UNIX to perform various read/write tests like 4KiB Q8T8 etc?

I searched and found the following items in OpenBSD ports, but they seem rather stale (to say the least — randread is still hosted on SourceForge in 2020, and BYTE magazine reportedly ceased online publication in 2013), and none of these tools make any mention of evaluating modern SSD performance, for which you probably need some extra code to deal with IO queues and threads or whatnot:

pkgsrc-2019Q4 released

January 03, 2020

Roy Marples dhcpcd-8.1.5 released

with the following changes:

If you are suffering from IPv6 addresses not transitioning from the tentative state (regression from dhcpcd-8.1 on Linux) you will need to do one of the following after installing dhcpcd:

December 28, 2019

DragonFly BSD Digest In Other BSDs for 2019/12/28

Quiet week, so catch up on your reading here.


December 21, 2019

Stack Overflow Rump unikernel compile successfully on a server but error on Ubuntu 19.04

I downloaded rumprun and followed the Tutorial to compile and install it.

git submodule update --init
CC=cc ./ hw

I compiled it successfully on an Ubuntu 18.04 system. When I copied the same code from that system to my Ubuntu 19.04 system, it failed to compile. There were many error messages, many of them warnings. For example,

rumprun/src-netbsd/sys/rump/dev/lib/libbpf/../../../../sys/conf.h:129:19: error: cast between incompatible function types from 'int (*)(void)' to 'int (*)(dev_t,  int,  int,  struct lwp *)' {aka 'int (*)(long unsigned int,  int,  int,  struct lwp *)'} [-Werror=cast-function-type]
 #define noclose  ((dev_type_close((*)))enodev)
rumprun/src-netbsd/sys/rump/dev/lib/libbpf/../../../../net/bpf.c:184:13: note: in expansion of macro 'noclose'
  .d_close = noclose,
rumprun/src-netbsd/sys/rump/dev/lib/libbpf/../../../../sys/conf.h:130:18: error: cast between incompatible function types from 'int (*)(void)' to 'int (*)(dev_t,  struct uio *, int)' {aka 'int (*)(long unsigned int,  struct uio *, int)'} [-Werror=cast-function-type]
 #define noread  ((dev_type_read((*)))enodev)
rumprun/src-netbsd/sys/rump/dev/lib/libbpf/../../../../net/bpf.c:185:12: note: in expansion of macro 'noread'
  .d_read = noread,

I searched for these compile errors but haven't found the same bugs reported. I don't know why this happens. How can I solve this problem? Thanks for your help!

December 20, 2019

Roy Marples dhcpcd-8.1.4 and dhcpcd-7.2.5 released

with the following change:

This issue has been around since 7.1.0 (bad me) so I've quickly released new versions to fix.

Roy Marples dhcpcd-8.1.3 released

with the following changes:

December 16, 2019 New Security Advisory: NetBSD-SA2019-006

December 13, 2019

Super User NetBSD - no pkg

After a full installation of the latest NetBSD I tried to launch pkgin, but got pkgin not found; I got the same for pkgsrc. Then I found that there's no /usr/pkg location.

Is that normal, or did I do something wrong?

December 12, 2019

NetBSD Blog Clang build bot now uses two-stage builds, and other LLVM/LLDB news

Upstream describes LLDB as a next generation, high-performance debugger. It is built on top of the LLVM/Clang toolchain and features great integration with it. At the moment, it primarily supports debugging C, C++ and ObjC code, and there is interest in extending it to more languages.

In February, I started working on LLDB, as contracted by the NetBSD Foundation. So far I've been working on reenabling continuous integration, squashing bugs, improving NetBSD core file support, extending NetBSD's ptrace interface to cover more register types and fixing compat32 issues, and fixing watchpoint support. In October 2019, I finished my work on threading support (pending pushes) and fought issues related to the upgrade to NetBSD 9.

November was focused on finally pushing the aforementioned patches, and on major buildbot changes. Notably, I worked on extending the test runs to compiler-rt, which required revisiting past driver issues as well as resolving new ones. More details on this below.

LLDB changes

Test updates, minor fixes

The previous month left us with a few regressions caused by the kernel upgrade. I fixed those I could reasonably fast; for the remaining ones, Kamil suggested that I mark them XFAIL for now and revisit them later while addressing broken tests. This is what I did.

While implementing additional tests in the threading patches, I've discovered that the subset of LLDB tests dedicated to testing lldb-server behavior was disabled on NetBSD. I've reenabled lldb-server tests and marked failing tests appropriately.

After enabling and fixing those tests, I implemented the missing support in the NetBSD plugin for getting thread names.

I've also switched our process plugin to use the newer PT_STOP request instead of calling kill(). The main advantage of PT_STOP is that it reliably notifies about SIGSTOP via wait() even if the process is already stopped.

I've been able to reenable the EOF detection test that was previously disabled due to bugs in old versions of the NetBSD 8 kernel.

Threading support pushed

After satisfying the last upstream requests, I was able to merge the three threading support patches:

  1. basic threading support,

  2. watchpoint support in threaded programs,

  3. concurrent watchpoint fixes.

This fixed 43 tests. It also triggered some flaky tests and a known regression, and I'm planning to address them as part of the final bug cracking.

Build bot redesign

Recap of the problems

The tests of the clang runtime components (compiler-rt, openmp) are performed using the freshly built clang. This version of clang attempts to build and link C++ programs with libc++. However, our clang driver naturally expects a system installation of libc++ — after all, we normally don't want the driver to include temporary build paths for regular executables! For this reason, building against the fresh libc++ in the build tree requires appropriate -cxx-isystem, -L and -Wl,-rpath flags.

So far, we have managed to resolve this by using existing mechanisms to add extra flags to the test compiler calls. However, the existing solutions do not seem to suffice for compiler-rt. While technically I could add more support code for that, I decided it was better to look for a more general and permanent solution.

Two-stage builds

As part of the solution, I proposed switching our build bot to a two-stage build model. That is, we first use the system GCC to build a minimal functioning clang. Then we use this newly built clang to build the whole LLVM suite, including another copy of clang.

The main advantage of this model is that we verify whether clang is capable of building a working copy of itself. Additionally, it insulates us against problems with the host GCC. For example, we've experienced issues with GCC 8 and the default -O3. On the negative side, it increases build time significantly, especially since the second stage needs to be rebuilt from scratch every time.

A common practice in the compiler world is to actually do three stages. In this case, that would mean building a minimal clang with the host compiler, then the second stage with the first stage's clang, then the third stage using the second stage's clang. This would have the additional benefit of verifying that clang is capable of building a compiler that is fully capable of building itself. However, this seems to have little actual gain for us, while it would increase the build time even more.

Compiler wrappers

Another interesting side effect of the two-stage build model is that it provides an opportunity to inject wrappers over the clang and clang++ built in the first stage. These wrappers allow us to add the necessary -I, -L and -Wl,-rpath arguments without having to patch the driver for this special case.
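As a sketch of what such a wrapper might look like (the stage-1 path and the wrapper name here are my assumptions, not the build bot's actual files):

```shell
# generate a hypothetical clang++ wrapper pointing at the stage-1 build tree
STAGE1="${STAGE1:-$HOME/llvm/stage1}"    # assumed stage-1 location
cat > clang++-wrapper <<EOF
#!/bin/sh
# add include/library paths for the freshly built in-tree libc++
exec "$STAGE1/bin/clang++" \\
    -cxx-isystem "$STAGE1/include/c++/v1" \\
    -L"$STAGE1/lib" -Wl,-rpath,"$STAGE1/lib" "\$@"
EOF
chmod +x clang++-wrapper
```

Pointing the test suite's compiler variable at the wrapper then makes every test compile pick up the in-tree libc++ without driver patches.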

Furthermore, I've used this opportunity to add experimental LLD usage to the first stage, and use it instead of GNU ld for the second stage. The LLVM linker has a significantly smaller memory footprint and therefore allows us to improve build efficiency. Sadly, proper LLD support for NetBSD still depends on patches that are waiting for upstream review.

Compiler-rt status and tests

The builds of compiler-rt have been reenabled for the build bot. I am planning to start enabling individual test groups (e.g. builtins, ASAN, MSAN, etc.) as I get them to work. However, there are still other problems to be resolved before that happens.

Firstly, there are new test regressions. Some of them seem to be specifically related to build layout changes, or to use of LLD as linker. I am currently investigating them.

Secondly, compiler-rt tests aim to test all supported multilib targets by default. We are currently preparing to enable compat32 in the kernel on the host running the build bot, and therefore achieve proper multilib support for running them.

Thirdly, ASAN, MSAN and TSAN are incompatible with ASLR (address space layout randomization), which is enabled by default on NetBSD. Furthermore, XRay is incompatible with the W^X restriction.

Making tests work with PaX features

Previously, we addressed the ASLR incompatibility by adding an explicit check for it and bailing out if it's enabled. However, while this somewhat resolves the problem for regular users, it means that the relevant tests can't be run on hosts that have ASLR enabled.

Kamil suggested that we should use paxctl to disable ASLR per-executable here. This has the obvious advantage that it enables the tests to work on all hosts. However, it required injecting the paxctl invocation between the build and run step in relevant tests.

The ‘obvious’ solution to this problem would be to add a kind of %paxctl_aslr substitution that evaluates to a paxctl call on NetBSD, and to : (no-op) on other systems. However, this would require updating all the relevant tests and making sure that the invocation keeps being included in new tests.

Instead, I've noticed that the %run substitution is already using various kinds of wrappers for other targets, e.g. to run tests via an emulator. I went for a more agreeable solution of substituting %run in appropriate test suites with a tiny wrapper calling paxctl before executing the test.
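As a sketch, such a %run wrapper could be as small as the following (the paxctl -a flag for disabling ASLR is my reading of NetBSD's paxctl(8), and on non-NetBSD hosts the call is simply skipped):

```shell
# generate a tiny wrapper: clear ASLR on the test binary, then run it
cat > run-wrapper <<'EOF'
#!/bin/sh
# paxctl is NetBSD-specific; skip the marking where it is unavailable
command -v paxctl >/dev/null 2>&1 && paxctl -a "$1"
exec "$@"
EOF
chmod +x run-wrapper
```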

Clang/LLD dependent libraries feature

Introduction to the feature

Enabling the two-stage builds also had another side effect. Since the stage 2 build is done via clang+LLD, a newly added feature of dependent libraries got enabled and broke our build.

Dependent libraries are a feature permitting source files to specify additional libraries that are afterwards injected into the linker's invocation. This is done via a #pragma originally used by MSVC. Consider the following example:

#include <stdio.h>
#include <math.h>
#pragma comment(lib, "m")

int main() {
    printf("%f\n", pow(2, 4.3));
    return 0;
}

When the source file is compiled using Clang on an ELF target, the lib comments are converted into a .deplibs object section:

$ llvm-readobj -a --section-data test.o
  Section {
    Index: 6
    Name: .deplibs (25)
    Flags [ (0x30)
      SHF_MERGE (0x10)
      SHF_STRINGS (0x20)
    Address: 0x0
    Offset: 0x94
    Size: 2
    Link: 0
    Info: 0
    AddressAlignment: 1
    EntrySize: 1
    SectionData (
      0000: 6D00                                 |m.|

When the objects are linked into a final executable using LLD, it collects all libraries from .deplibs sections and links to the specified libraries.

The example program pasted above would have to be built on systems requiring explicit -lm (e.g. Linux) via:

$(CC) ... test.c -lm

However, when using Clang+LLD, it is sufficient to call:

clang -fuse-ld=lld ... test.c

and the library is included automatically. Of course, this normally makes little sense, because you have to maintain compatibility with other compilers and linkers, as well as with old versions of Clang and LLD.

Use of LLVM to approach static library dependency problem

LLVM started using the deplibs feature internally in D62090 in order to specify linkage between runtimes and their dependent libraries. Apparently, the goal was to provide an in-house solution to the static library dependency problem.

The problem discussed here is that static libraries on Unix-derived platforms are primitive archives containing object files. Unlike shared libraries, they do not contain lists of the other libraries they depend on. As a result, when linking against a static library, the user needs to explicitly pass all its dependent libraries to the linker invocation.

Over the years, a number of workarounds were proposed to relieve the user (or build system) from having to know the exact dependencies of the static libraries used. A few worth noting include:

The first two solutions work at the build system level, and are therefore portable to different compilers and linkers. The third one requires linker support, but has been used successfully to some degree due to the wide deployment of GNU binutils, as well as support in other linkers (e.g. LLD).

Dependent libraries provide yet another attempt to solve the same problem. Unlike the listed approaches, it is practically transparent to the static library format — at the cost of requiring both compiler and linker support. However, since the runtimes are normally supposed to be used by Clang itself, at least the first of those requirements can normally be assumed to be satisfied.

Why did it break NetBSD?

After all this lengthy introduction, let's get to the point. As a result of my changes, the second stage is now built using Clang/LLD. However, it seems that the original change making use of deplibs in the runtimes was tested only on Linux — and it caused failures for us, since it implicitly appended libraries not present on NetBSD.

Over time, users of a few other systems have added various #ifdefs in order to exclude Linux-specific libraries from their systems. However, this solution is hardly optimal. It requires us to maintain two disjoint sets of rules for adding each library — one in CMake for linking of shared libraries, and another one in the source files for emitting dependent libraries.

Since dependent library pragmas are present only in source files and not in headers, I went for a different approach. Instead of using a second set of rules to decide which libraries to link, I exported the results of the CMake checks into -D flags, and made the dependent libraries conditional on those check results.

Firstly, I've fixed deplibs in libunwind in order to fix builds on NetBSD. Afterwards, per upstream's request I've extended the deplibs fix to libc++ and libc++abi.

Future plans

I am currently still working on fixing regressions after the switch to two-stage build. As things develop, I am also planning to enable further test suites there.

Furthermore, I am planning to continue with the items from the original LLDB plan. Those are:

  1. Add support for backtracing through the signal trampoline and extend the support to libexecinfo and the unwind implementations (LLVM, nongnu). Examine adding CFI support to interfaces that need it to provide more stable backtraces (both kernel and userland).

  2. Add support for i386 and aarch64 targets.

  3. Stabilize LLDB and address breaking tests from the test suite.

  4. Merge LLDB with the base system (under LLVM-style distribution).

This work is sponsored by The NetBSD Foundation

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

December 02, 2019

NetBSD Blog First release candidate for NetBSD 9.0 available!

Since the start of the release process four months ago a lot of improvements went into the branch - more than 500 pullups were processed!

This includes usbnet (a common framework for usb ethernet drivers), aarch64 stability enhancements and lots of new hardware support, installer/sysinst fixes and changes to the NVMM (hardware virtualization) interface.

We hope this will lead to the best NetBSD release ever (only to be topped by NetBSD 10 next year).

Here are a few highlights of the new release:

You can download binaries of NetBSD 9.0_RC1 from our Fastly-provided CDN.

For more details refer to the official release announcement.

Please help us out by testing 9.0_RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome, please mail releng. Your input will help us put the finishing touches on what promises to be a great release!



December 01, 2019

Server Fault ssh tunnel refusing connections with "channel 2: open failed"

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling.

From my laptop I launch:

$ ssh -L 7000:localhost:7000 [email protected] -N -v

Then, in another shell:

$ irssi -c localhost -p 7000

The ssh debug says:

debug1: Connection to port 7000 forwarding to localhost port 7000 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip: listening port 7000 for localhost port 7000, connect from port 53954, nchannels 3

I also tried with localhost:80 to connect to the (remote) web server, with identical results.

The remote host runs NetBSD:

bash-4.2# uname -a
NetBSD host 5.1_STABLE NetBSD 5.1_STABLE (XEN3PAE_DOMU) #6: Fri Nov  4 16:56:31 MET 2011  [email protected]:/m/obj/m/src/sys/arch/i386/compile/XEN3PAE_DOMU i386

I am a bit lost. I tried running tcpdump on the remote host, and I spotted these 'bad chksum':

09:25:55.823849 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 67, bad cksum 0 (->3cb3)!) > P, cksum 0xfe37 (incorrect (-> 0xa801), 1622402406:1622402421(15) ack 1635127887 win 4096 <nop,nop,timestamp 5002727 5002603>

I tried restarting the ssh daemon to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh.

Ideas..?

New Developer in November 2019

November 26, 2019 New Security Advisory: NetBSD-SA2019-005

November 25, 2019

Super User What device does NetBSD use for a USB modem?

I'm testing some software on NetBSD 8.1 x86_64. The software opens a USB modem and issues AT commands. The software tested OK on Debian, Fedora, OS X, and OpenBSD. The software is having trouble on NetBSD.

NetBSD's dmesg shows:

umodem0 at uhub1 port 1 configuration 2 interface 0
umodem0: U.S.Robotics (0xbaf) USB Modem (0x303), rev 2.00/2.00, addr 2, iclass 2/2
umodem0: data interface 1, has CM over data, has break
umodem0: status change notification available
ucom0 at umodem0

If I am parsing the NetBSD man pages properly (which may not be the case), I should be able to access the modem via /dev/ucom0. Also see UMODEM(4) man page.

The test user is part of the dialer group. The software was not able to open /dev/ucom0, /dev/umodem0, ucom0 or umodem0. All opens result in No such file or directory. Additionally, there are no /dev/ttyACMn or /dev/cuaUn devices.

How do I access the modem on NetBSD?

November 17, 2019

Stack Overflow Compile only kernel module on NetBSD

Is there a way to compile only a kernel module on NetBSD? I can't seem to figure out how to do so without recompiling the entire kernel. Thanks!

October 25, 2019

Stack Overflow How can I make a NetBSD VM halt itself in google compute engine

I've got a batch job that I want to run in google compute engine on a NetBSD instance. I expected that I could just shutdown -hp now at the end of the job and the instance would be powered off. But when I do that it still remains in the running state according to the google cloud console and CLI. How do I make a NetBSD virtual machine in google cloud shut itself off when it's no longer needed?

Note: Google cloud SDK is not available on NetBSD

August 23, 2019

Unix Stack Exchange NetBSD - Unable to install pkgin

I'm running NetBSD on the Raspberry Pi 1 Model B.

uname -a
NetBSD rpi 7.99.64 NetBSD 7.99.64 (RPI.201703032010Z) evbarm

I'm trying to install pkgin but I'm receiving an error about version mismatch ...

pkg_add -f pkgin
pkg_add: Warning: package `pkgin-0.9.4nb4' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.42 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Warning: package `pkg_install-20160410nb1' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.58 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Can't create pkgdb entry: /var/db/pkg/pkg_install-20160410nb1: Permission denied
pkg_add: Can't install dependency pkg_install>=20130901, continuing
pkg_add: Warning: package `libarchive-3.3.1' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.59 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Can't create pkgdb entry: /var/db/pkg/libarchive-3.3.1: Permission denied
pkg_add: Can't install dependency libarchive>=3.2.1nb2, continuing
pkg_add: Can't create pkgdb entry: /var/db/pkg/pkgin-0.9.4nb4: Permission denied
pkg_add: 1 package addition failed

How can I install the correct version?

August 20, 2019

Super User How to run a Windowed JAR file over SSH without installing JRE and without root access on NetBsd?

First: I can use Java, but for what I want to achieve (building a database where the only application supporting the format is in Java), I need 100 GB of RAM for 20 hours.

I have access to a server with the required RAM, but not as root and no JRE is available. The same is true for the Xorg libraries.

Here’s the uname :

8.0_STABLE NetBSD 8.0_STABLE (GENERIC) #0: Sun Feb 24 10:50:49 UTC 2019  [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC amd64

The Linux layer is installed, but nothing else is: not even glibc, so the only applications that can be run are the ones that are statically compiled.

So not only is Java not installed, but some of the required shared libraries are missing…
However, I have full write access to my $HOME directory, and I can run my own executables from there.

Is there a way to convert a JAR file into a NetBSD executable or a statically linked Linux executable? I also have the source code of the JAR file, if compiling is an acceptable solution.

I only found out about GCJ, but I'm unsure if Java 7 is supported…

Amitai Schlair Announcing notqmail

Running my own email server has its challenges. Chief among them: my favorite mail-server software hasn’t been updated since I started using it in 1998.

The qmail logo
qmail logo

Okay, that’s not entirely true. While qmail hasn’t been updated by its original author, a group of respected users created netqmail, a series of tiny updates that were informed, conservative, and careful. By their design, it was safe for everyone running qmail to follow netqmail, so everyone did. But larger changes in the world of email — authentication, encryption, and ever-shifting anti-spam techniques — remained as puzzles for each qmail administrator to solve in their own way. And netqmail hasn’t been updated since 2007.

One fork per person

In the interim, devotees have continued maintaining their own individual qmail forks. Some have shared theirs publicly. I’ve preferred the design constraints of making minimal, purpose-specific, and conflict-avoidant add-ons and patches. Then again, these choices are motivated by the needs of my qmail packaging, which I suppose is itself a de facto fork.

I’ve found this solo work quite satisfying. I’ve learned more C, reduced build-time complexity, added run-time configurability, and published unusually polished and featureful qmail packages for over 20 platforms. Based on these experiences, I’ve given dozens of workshops and talks. In seeking to simplify system administration for myself and others, I’ve become a better programmer and consultant.

Still, wouldn’t it be more satisfying if we could somehow pool our efforts? If, long after the end of DJB’s brilliant one-man show, a handful of us could shift how we relate to this codebase — and to each other — in order to bring a collaborative open-source effort to life? If, with netqmail as inspiration, we could produce safe updates while also evolving qmail to meet more present-day needs?

One fork per community

My subtle artwork
notqmail logo == qmail logo overlaid by a red circle with a slash through it

Say hello to notqmail.

Our first release is informed, conservative, and careful — but bold. It reflects our brand-new team’s rapid convergence on where we’re going and how we’ll get there. In the span of a few weeks, we’ve:

I say “bold” because, for all the ways we intend to hew to qmail tradition, one of our explicit goals is a significant departure. Back in the day, qmail’s lack of license, redistribution restrictions, technical barriers, and social norms made it hard for OS integrators to create packages, and hard for package users to get help. netqmail 1.06 expressed a desire to change this. In notqmail 1.07, we’ve made packaging much easier. (I’ve already updated pkgsrc from netqmail to notqmail, and some of my colleagues have prepared notqmail RPM and .deb packages.) Further improvements for packagers are part of what’s slated for 1.08.

What’s next

Looking much further ahead, another of our explicit goals is “Meeting all common needs with OS-provided packages”. We have a long way to go. But we couldn’t be off to a better start.

By our design, we believe we’ve made it safe for everyone running qmail to follow notqmail. We hope you’ll vet our changes carefully, then update your installations to notqmail 1.07. We hope you’ll start observing us as we continue the work. We hope you’ll discuss freely on the qmail mailing list. We hope you’ll be a part of the qmail revival in ways that are comfortable for you. And we hope that, in the course of time, notqmail will prove to be the community-driven open-source successor to qmail.

August 11, 2019

Unix Stack Exchange How to use resize_ffs in netbsd

I'm trying to use resize_ffs with netbsd to increase the size of my partition. I have NetBSD running in a virtual machine, and have expanded the size of the disk, and now wish to grow the partition to match.

The man_page for the tool is here

I am trying to grow a 300mb partition to 1gb.

The tool's manpage says that specifying a size is not mandatory, and that if it is not specified it will grow to use the available space (the ideal behaviour); however, this results in an error saying newsize not known.

I have used various online tools to try to calculate the disk geometry, but no matter what I try when I pass a number to -s, I get the error 'bad magic number'.

I have been unable to find examples of using this tool online.

What is the correct way to use resize_ffs to grow a partition to use available disk space?

July 31, 2019

NetBSD Package System (pkgsrc) on DaemonForums xf86-input-keyboard, xf86-video-vmware, unrecoverable error
Hello everybody:

I'm still trying to get NetBSD to work. Complicated OS, at least at this stage of development. I wonder "how can I use it as a desktop graphical OS, if xf86-input-keyboard, xf86-video-vmware, and so on, can't be installed?"

These are not packages stored in

They are not stored under any release of NetBSD for i386 systems.

All of them must be installed from source...

But an error always arises: ... randrproto>1.6.0 needed

This is not an error in NetBSD itself, but at this time it has not been solved, and it seems to be an endless error across the next releases of NetBSD.

According to documents on the net this bug is solved by using xorgproto instead of randrproto, but that does not really solve anything; the bug is always present, not fixed anyway.

Does anybody have a binary package for xf86-input-keyboard?

A package that could be installed without these issues?

Thank you all for your help.

P.S.: My NetBSD is the 8.0 release, installed in a VMware environment under Windows 7.

July 30, 2019

Unix Stack Exchange pkgin installation problem (NetBSD)

I just installed NetBSD 7.1.1 (i386) on my old laptop.

During the installation, I could not install pkgin (I don't know why), so I skipped it, and now I have NetBSD 7.1.1 installed on my laptop without pkgin.

My problem is "How to install pkgin on NetBSD (i386) ?"

I found this (Click) tutorial and followed it:

I tried :

#export PKG_PATH=""
# pkg_add -v pkgin

And I got :

pkg_add: Can't process*: Not Found
pkg_add: no pkg found for 'pkgin',sorry.
pkg_add: 1 package addition failed

I know this is the wrong command, because this ftp address is for amd64 while my laptop and this NetBSD are i386. (I can't find the correct command for i386.)

I also followed instructions of (Click), and I did

git clone

on another computer and copied the output (a folder named pkgin) to my NetBSD machine (my NetBSD doesn't have the 'git' command)

and then I did :

./configure --prefix=/usr/pkg --with-libraries=/usr/pkg/lib --with-includes=/usr/pkg/include

and then :


but I got :

#   compile  pkgin/summary.o
gcc -O2    -std=gnu99    -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wno-sign-compare  -Wno-traditional  -Wa,--fatal-warnings -Wreturn-type -Wswitch -Wshadow -Werror    -DPKGIN_VERSION=\""0.9.4 for NetBSD-7.1.1 i386"\" -DNETBSD  -g -DLOCALBASE=\"/usr/local\"           -DPKG_SYSCONFDIR=\"/usr/local/etc\"         -DPKG_DBDIR="\"/var/db/pkg\""           -DDEF_LOG_DIR="\"/var/db/pkg\""         -DPKGIN_DB=\"/var/db/pkgin\"            -DPKGTOOLS=\"/usr/local/sbin\" -DHAVE_CONFIG_H -D_LARGEFILE_SOURCE -D_LARGE_FILES -DCHECK_MACHINE_ARCH=\"i386\" -Iexternal -I. -I/usr/local/include  -c    summary.c
*** Error code 1

make: stopped in /root/pkgin

I think this error occurs because of the dependencies (which are mentioned in but still, I don't know how to install those dependencies.

EDIT: I found "" but it still says

no pkg fond for 'pkgin', sorry


** I solved the problem by writing 7.1 instead of 7.1.1**

July 29, 2019

NetBSD General on DaemonForums Fighting with NetBSD installig packages
There is a video explaining how to install NetBSD 8.0. I followed that video, and there is something I couldn't find in the NetBSD docs: installing Bash and using pkgin inside it enables installing packages that otherwise can't be installed.

In turn, when I tried to install xf86-input-vmware, xf86-input-keyboard and xf86-video-vmware... these packages are not in the repository at all.

Looking around the net, I found these packages on an ftp site of SmartOS, which uses NetBSD packages.

I downloaded these packages, and I have installed video-vmware and input-vmmouse using pkg_add -f program_name.tgz.

The package xf86-input-keyboard gives an error that "keyring" was not found, and can't be installed.

The question is: why, if the video shows how to install those packages directly by using pkgin install program_name, do those packages not exist anymore in the NetBSD repositories?

Using pkgsrc and make install clean gives an unrecoverable error about randrproto>1.6.0 being needed.

I hope NetBSD will update the repositories, because it is very difficult to work with this OS.

Can anybody help me with this?

Unix Stack Exchange How to install directly from a package *.tgz file in NetBSD, OpenBSD, or FreeBSD

Is there any way to install software from the *.tgz file that is its package, in NetBSD? Or indeed in operating systems with similar package managers such as OpenBSD or FreeBSD?

For example, I can install the nano editor on NetBSD using this command:

pkgin nano

(I could do the same with a similar pkg install nano command on FreeBSD.)

What if I download the package file directly from the operating system's package repository, which would be a URL like

Having obtained the package file from the repository by hand like this, is there any way to now install nano directly from it? How do I do that?

July 13, 2019

Jeremy C. Reed 2019-July-13 pfSense Essentials Book Writing

This week I received my printed proof from the printer and approved it for printing. It is now for sale at Amazon and Barnes and Noble.

I set an alarm to work on it very early a few days a week, and it took me a few years. (I am blessed to only commute a few times a year, so I make sure that I don't waste that gifted time.)

This book was written in DocBook using NetBSD and vi. The print-ready book was generated with Dblatex version 0.3.10 with a custom stylesheet, pdfTeX 3.14159265-2.6-1.40.19 (Web2C 2018), and the TeX document production system installed via TeX Live and pkgsrc. Several scripts and templates were created to help keep the document consistent.

The book work was managed using the Subversion version control software. I carefully outlined my steps in utilizing the useful interfaces and identified every web and console interface. The basic writing process included adding over 350 special comment tags in the docbook source files that identified topics to cover and for every pfSense web interface PHP script (highlighting if they were main webpages from the pfSense menu). As content was written, I updated these special comments with a current status. A periodic script checked the docbook files and the generated book and reported on writing progress and current needs.

During this writing, nearly every interface was tested. In addition, code and configurations were often temporarily customized to simulate various pfSense behaviors and system situations. Most of the pfSense interface and low-level source code was studied, which helped with identifying pfSense configurations and features that didn't display in standard setups and all of its options. The software was upgraded several times and installed and ran in multiple VMs and hardware environments with many wireless and network cards, including with IPv6. In addition, third-party documentation and even source code was researched to help explain pfSense configurations and behaviors.

As part of this effort, I documented 352 bugs (some minor and some significant) and code suggestions that I found from code reading or from actual use of the system. (I need to post that.)

The first Subversion commit for this book was in July 2014. It has commits in 39 different months, 656 commits in total. The book's DocBook source had 3789 non-printed comments and 56,193 non-blank lines of text. The generated book has over 180,000 words. My Subversion logs show commits on 41 different Saturdays. Just re-reading with cleanup took me approximately 160 hours.

July 11, 2019

Stack Overflow configuration of tty on BSD system

For a command like this one on Linux (debian-linux 4.19.0-1-amd64 #1 SMP Debian 4.19.12-1 (2018-12-22) x86_64 GNU/Linux) with Xfce I get:

[email protected]:~$ dbus-send --system --type=method_call --print-reply --dest
=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListActivatable  

The same command on OpenBSD (LeOpenBSD 6.4 GENERIC.MP#364 amd64) with Xfce gives:

ktop/DBus org.freedesktop.DBus.ListActivatableNames   <

On Linux, when the cursor reaches the end of the screen, output wraps to the next line.
On BSD (OpenBSD, NetBSD), the command line continues on the same line and the first words disappear.
It is the same in xfce-terminal-emulator, xterm, or on a raw TTY (Ctrl-Alt-F3).

I tried adding am to the default section of gettytab, to no avail.
The termcap man page says:
If the display wraps around to the beginning of the next line when the cursor reaches the right margin, then it should have the am capability.
What can I do?
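As a first diagnostic, it may help to confirm whether the terminal entry actually in use advertises auto-margins (commands illustrative; infocmp ships with ncurses):

```
$ echo $TERM                     # which termcap/terminfo entry is in use
$ infocmp -1 $TERM | grep -w am  # prints "am," if auto-margin is declared
```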

July 09, 2019

NetBSD Package System (pkgsrc) on DaemonForums Zabbix Frontend Dependencies
Hi all,
I used pkgsrc to install the Zabbix frontend. I notice, though, that it automatically installs some php71 dependencies. I really wanted to use php73, as php71 has some known vulnerabilities. Is there a way to do that?
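One common pkgsrc approach (assuming the Zabbix frontend package honors the global PHP default, which is not confirmed here) is to set the preferred PHP version in mk.conf before building from source:

```
# /etc/mk.conf — ask pkgsrc to prefer PHP 7.3 where a package allows it
PHP_VERSION_DEFAULT= 73
```

Note that prebuilt binary packages are already linked against whatever default the build used, so this only takes effect when rebuilding the frontend and its dependencies from pkgsrc.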

July 08, 2019

Server Fault Webserver farm with NFS share (autofs failure)

I am trying to set up a farm of webservers, consisting of internal, external, and worker servers.

  1. The actual site content is stored on an internal NFS server deep in the internal network. All site content management is centralized.

  2. BSD-based external servers run Lighttpd, which does all the HTTP/HTTPS work and serves static content. The main NFS share is auto-mounted via a special path like /net/server/export/www/site/ (via amd).

  3. Every Lighttpd instance has fastcgi parameters pointing to several worker servers, which run php-fpm (for example). Different sites may require different PHP versions or arrangements, so www01 and www02 may serve site "A" with php-fpm over PHP 5.6 while www05 and www06 serve site "B" with php-fpm over PHP 7.2.

  4. Every worker gets requests for certain sites (one or more) under the path /net/server/export/www/site and executes PHP or any other code. The workers also run amd (on BSD) or autofs (on Linux).

  5. For some sites Lighttpd may not forward fastcgi but do proxying instead, so workers can run Apache or another web server (even Java-based).

External servers are always BSD, internal servers too, but the workers can differ according to actual needs.

This all works well when the workers are BSD. If we use Linux on the workers, it stops working once the share is automatically unmounted. When someone tries to access the site, they get a 404 error. When I connect to the server via ssh, I see no mounted share in "df -h". If I do any "ls" on /net/server/export, it is auto-mounted as intended and the site starts to work. On BSD systems, df shows the amd shares as always mounted, despite the 60-second unmount period.

I believe there is a difference between the amd and autofs approaches: php-fpm calls on Linux seem to be "invisible" to autofs and do not trigger an auto-mount, while any other access to /net/server works at any time and does trigger one. Also, this happens not only with php-fpm; Apache serving static content from an auto-mounted NFS share behaves the same way.

Sorry for the long description, but I tried to describe it thoroughly. The main question: does anyone know why calls to /net/server may not trigger an auto-mount in autofs, and how to prevent this behavior?

For a lot of reasons I do not consider static mounting, so that is not an option here. As for Linux versions, it was mostly tested on OEL 7.recent.
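For comparison, a typical Linux autofs setup mirroring amd's /net behavior uses the built-in -hosts map; the 60-second timeout below is illustrative, chosen only to match the amd unmount period mentioned above:

```
# /etc/auto.master (Linux autofs) — illustrative entry
/net  -hosts  --timeout=60
```

Whether php-fpm's access pattern triggers this map is exactly the open question in the post; the fragment just shows the configuration being assumed.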

July 04, 2019

OS News OpenBSD is now my workstation
Why OpenBSD? Simply because it is the best tool for the job for me for my new-to-me Lenovo Thinkpad T420. Additionally, I do care about security and non-bloat in my personal operating systems (business needs can have different priorities, to be clear). I will try to detail what my reasons are for going with OpenBSD (instead of GNU/Linux, NetBSD, or FreeBSD of which I’m comfortable using without issue), challenges and frustrations I’ve encountered, and what my opinions are along the way. I’ve never managed to really get into the BSDs, as Linux has always served my needs for a UNIX-like operating system quite well. I feel like the BSDs are more pure and less messy than Linux, but is that actually true, or just my perception?

July 03, 2019

Super User Using a Console-only NetBSD VM

I am experimenting with NetBSD and seeing if I can get the Fenrir screen reader to run on it. However, I hit a snag post-install: the console I was using for the installation was working perfectly well, but it stopped working altogether once I completed the install. For reference, here is the line I used for virt-install:

virt-install --connect qemu:///system -n netbsd-testing \
             --ram 4096 --vcpus=8 \
             --cpu=host \
             -c /home/sektor/Downloads/boot-com.iso  \
             --os-type=netbsd --os-variant=netbsd8.0 \
             --disk=pool=devel,size=100,format=qcow2 \
             -w network=default --nographics 

When the NetBSD install program asked me for the type of terminal I was using, I accepted the default, which was VT200. As I recall, I told it to use the BIOS for booting, and not any of the serial COM ports. Has anyone had further experience with running a libvirt virtual machine with no graphics, and any pointers on how to get a working console?
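One hedged guess, given that the install chose BIOS rather than serial console booting: point the installed NetBSD bootloader at the first serial port via /boot.cfg (see boot.cfg(5); the menu entry and timeout are illustrative):

```
# /boot.cfg on the installed NetBSD disk
consdev=com0
timeout=5
menu=Boot normally:boot netbsd
```

With the bootloader and kernel console on com0, virt-install's --nographics serial console should see boot messages and a login prompt again.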


June 29, 2019

NetBSD General on DaemonForums View X session of instance in VirtualBox via VNC
Does anyone have a working howto on how to attach X session on NetBSD running within VirtualBox to VNC on the host computer?

May 24, 2019

NetBSD Installation and Upgrading on DaemonForums no bootable device after installation
After installing NetBSD 8 I have a couple of problems:
1. If the USB drive with the installation image is not inserted, the system will not boot.
2. Running X -configure causes a reboot.

1. Without the installation USB:

PXE-M0F: Exiting PXE ROM.
No bootable -- insert boot disk and press any key

The first time I thought I had made a mistake and done something to the BIOS, but the partitions look fine, just as they should per The Guide:

a:       0    472983    472984    FFSv2
b:  472984    476939      3985    swap
c:       0    476939    476939    NetBSD partition
d:       0    476939    476940    whole disc
e:       0         0         0    unused

I am at a bit of a loss, since as far as I know it should not be possible to set an installation medium as the boot source of an OS.

2. I do not know if this is unsupported hardware or related to #1.

DRM error in radeon_get_bios:
Unable to locate a BIOS ROM
radeon0: error: Fatal error during GPU init

I am trawling through documentation, but on a telephone, so I also cannot post a dmesg right now, although I can look through other threads where one is posted and copy it. (A little later in the day.)

March 15, 2019

Stack Overflow host netbsd 1.4 or 1.5 i386 cross-compile target macppc powerpc g3 program

For some reason, I want to develop a program which can work on NetBSD 1.4 or 1.5 on powerpc; the target CPU is a PowerPC 750 (macppc platform, a nearly 20-year-old system), but I can't find out how to set up this kind of cross-compile environment: VMware host i386 NetBSD 1.5 + egcs 1.1.1 + binutils 2.9.1 ---> target host macppc powerpc NetBSD 1.5 + egcs 1.1.1. I downloaded and installed NetBSD 1.5 in VMware and downloaded pkgsrc. When I make /usr/src/pkgsrc/cross/powerpc-netbsd, I get a gcc that runs on i386 but not a cross-gcc. Why? Thank you for any help!

March 07, 2019

Amitai Schlair NYCBUG: Maintaining qmail in 2019

On Wednesday, March 6, I attended New York City BSD User Group and presented Maintaining qmail in 2019. This one pairs nicely with my recent DevOpsDays Ignite talk about why and how to Run Your @wn Email Server! That this particular “how” could be explained in 5 minutes is remarkable, if I may say so myself. In this NYCBUG talk — my first since 2014 — I show my work. It’s a real-world, open-source tale of methodically, incrementally reducing complexity in order to afford added functionality.

My abstract:

qmail 1.03 was notoriously bothersome to deploy. Twenty years later, for common use cases, I’ve finally made it pretty easy. If you want to try it out, I’ll help! (Don’t worry, it’s even easier to uninstall.) Or just listen as I share the sequence of stepwise improvements from then to now — including pkgsrc packaging, new code, and testing on lots of platforms — as well as the reasons I keep finding this project worthwhile.

Here’s the video:

January 25, 2019

Amitai Schlair DevOpsDays NYC: Run Your @wn Email Server!

In late January, I was at DevOpsDays NYC in midtown Manhattan to present Run Your @wn Email Server!

My abstract:

When we’re responsible for production, it can be hard to find room to learn. That’s why I run my own email server. It’s still “production” — if it stays down, that’s pretty bad — but I own all the decisions, take more risks, and have learned lots. And so can you! Come see why and how to get started.

With one command, install famously secure email software. A couple more and it’s running. A few more and it’s encrypted. Twiddle your DNS, watch the mail start coming in, and start feeling responsible for a production service in a way that web hosting can’t match.

January 07, 2019

Amitai Schlair 2018Q4 qmail updates in pkgsrc

Happy 2019! Another three months, another stable branch for pkgsrc, the practical cross-platform Unix package manager. I’ve shipped quite a few improvements for qmail users in our 2018Q4 release. In three sentences:

  1. qmail-run gains TLS, SPF, IPv6, SMTP recipient checks, and many other sensible defaults.
  2. Most qmail-related packages — including the new ones used by qmail-run — are available on most pkgsrc platforms.
  3. rc.d-boot starts rc.conf-enabled pkgsrc services at boot time on many platforms.

In one:

It’s probably easy for you to run qmail now.

On this basis, at my DevOpsDays NYC talk in a few weeks, I’ll be recommending that everyone try it.

Try it

Here’s a demo on CentOS 7, using binary packages:

The main command I ran:

$ sudo env PKG_RCD_SCRIPTS=yes pkgin -y install qmail-run rc.d-boot

Here’s another demo on Debian 9, building from source packages:

The commands I ran:

$ cd ...pkgsrc/mail/qmail-run && make PKG_RCD_SCRIPTS=yes install
$ cd ../../pkgtools/rc.d-boot && make PKG_RCD_SCRIPTS=yes install

These improvements were made possible by acceptutils, my redesigned TLS and SMTP AUTH implementation that obviates the need for several large and conflicting patches. Further improvements are expected.

Here’s the full changelog for qmail as packaged in pkgsrc-2018Q4.




September 15, 2018

Amitai Schlair Coding Tour Summer 2018: Conclusion

After my fourth and final tour stop, we decamped to Mallorca for a week. With no upcoming workshops to polish and no upcoming plans to finalize, the laptop stayed home. Just each other, a variety of beaches, and the annual Les Festes del Rei En Jaume that Bekki and I last saw two years ago on our honeymoon. The parade was perhaps a bit much for Taavi.

Looking away

The just-released episode 99 of Agile for Humans includes some reflections (starting around 50 minutes in) from partway through my coding tour. As our summer in Germany draws to a close, I’d like to reflect on the tour as a whole.

Annual training

I’ve made a habit of setting aside time, attention, and money each year for focused learning. My most recent trainings, all formative and memorable:

I hoped a coding tour would fit the bill for 2018. It has.

Geek joy

At the outset, I was asked how I’d know whether the tour had gone well. My response: “It’s a success if I get to meet a bunch of people in a bunch of places and we have fun programming together.”

I got to program with a bunch of people in a bunch of places. We had fun doing it. Success!

New technologies

My first tour stop offered such an ecumenical mix of languages, tools, and techniques that I began writing down each new technology I encountered. I’m glad I started at the beginning. Even so, this list of things that were new or mostly new to me is probably incomplete:

In the moment, learning new technologies was a source of geek joy. In the aggregate, it’s professionally useful. I think the weight clients tend to place on consultants needing to be expert in their tech stack is dangerously misplaced, but it doesn’t matter what I think if they won’t bring me in. Any chance for me to broaden my tech background is a chance for a future client to take advantage of all the other reasons I can be valuable to them.


As Schmonz’s Theorem predicts, code-touring is both similar to and different from consulting.

When consulting, I expect most of my learning to be meta: the second loop (at least) of double-loop learning. When touring, I became reacquainted with the simple joys of the first loop, spending all day learning new things to be able to do. It often felt like play.

When consulting, I initially find myself being listened to in a peculiar way, my words being heard and measured carefully for evidence of my real intentions. My first tasks are to demonstrate that I can be trusted and that I can be useful, not necessarily in that (or any) order. Accomplishing this as a programmer on tour felt easier than usual.

When I’m consulting, not everyone I encounter wants me there. Some offer time and attention because they feel obligated. On this tour, even though some folks were surprised to find out their employer wasn’t paying me anything, I sensed people were sharing their time and attention with me out of curiosity and generosity. I believe I succeeded in making myself trusted and useful to each of them, and the conversation videos and written testimonials help me hold the belief.

Professional development

With so much practice designing and facilitating group activities, so much information-rich feedback from participants, and so many chances to try again soon, I’ve leveled up as a facilitator. I was comfortable with my skills, abilities, and material before; I’m even more comfortable now. In my tour’s final public meetup, I facilitated one of my legacy code exercises for three simultaneous mobs. It went pretty well — in large part because of the participants, but also because of my continually developing skill at designing and facilitating learning experiences.

As a consultant, it’s a basic survival skill to quickly orient myself in new problem spaces. As a coach, my superpower might be that I help others quickly orient themselves in their problem spaces. Visiting many teams at many companies, I got lots of practice at both. These areas of strength for me are now stronger, the better with which to serve my next clients.

On several occasions I asked mobs not to bother explaining the current context to me before starting the timer. My hypothesis was, all the context I’d need would reveal itself through doing the work and asking a question or two along the way. (One basis among many for this hypothesis: what happened when I showed up late to one of Lennart Fridén’s sessions at this spring’s Mob Programming Conference and everyone else had already read the manual for our CPU.) I think there was one scenario where this didn’t work extremely well, but my memory’s fuzzy — have I mentioned meeting a whole bunch of people at a whole bunch of workplaces, meetups, and conferences? — so I’ll have to report the details when I rediscover it.

You can do this too, and I can help

When designing my tour, I sought advice from several people who’d gone on one. (Along the way I met several more, including Ivan Sanchez at SPA in London and Daniel Temme at SoCraTes in Soltau.)

If you’re wondering whether a coding tour is something you want to do, or how to make it happen, get in touch. I’m happy to listen and offer my suggestions.

What’s next for me, and you can help

Like what I’m doing? Want more of it in your workplace?

I offer short, targeted engagements in the New York metro area — coaching, consulting, and training — co-designed with you to meet your organization’s needs.

More at


Yes, lots.

It’s been a splendid set of privileges to have the free time to go on tour, to have organizations in several countries interested to have me code with them, and to meet so many people who care about what I care about when humans develop software together.

Five years ago I was discovering the existence of a set of communities of shared values in software development and my need to feel connected to them. Today I’m surer than ever that I’ve needed this connection and that I’ve found it.

Thanks to the people who hosted me for a week at their employer: Patrick Drechsler at MATHEMA/Redheads in Erlangen, Alex Schladebeck at BREDEX in Braunschweig, Barney Dellar at Canon Medical Research in Edinburgh, and Thorsten Brunzendorf at codecentric in Nürnberg and München. And thanks to these companies for being willing to take a chance on bringing in an itinerant programmer for a visit.

Thanks and apologies in equal measure to Richard Groß, who did all the legwork to have me visit MaibornWolff in Frankfurt, only to have me cancel at just about the last minute. At least we got to enjoy each other’s company at Agile Coach Camp Germany and SoCraTes (the only two people to attend both!).

Thanks to David Heath at the UK’s Government Digital Service for inviting me to join them on extremely short notice when I had a free day in London, and to Olaf Lewitz for making the connection.

Thanks to the meetups and conferences where I was invited to present: Mallorca Software Craft, SPA Software in Practice, pkgsrcCon, Hackerkegeln, JUG Ostfalen, Lean Agile Edinburgh, NEBytes, and Munich Software Craft. And thanks to Agile Coach Camp Germany and SoCraTes for the open spaces I did my part to fill.

Thanks to Marc Burgauer, Jens Schauder, and Jutta Eckstein for making time to join me for a meal. Thanks to Zeb Ford-Reitz, Barney Dellar, and their respective spice for inviting me into their respective homes for dinner.

Thanks to J.B. Rainsberger for simple, actionable advice on making it easy for European companies to reimburse my expenses, and more broadly on the logistics of going on European consulting-and-speaking tours when one is from elsewhere. (BTW, his next tour begins soon.)

Thanks all over again to everyone who helped me design and plan the tour, most notably Dr. Sal Freudenberg, Llewellyn Falco, and Nicole Rauch.

Thanks to Woody Zuill, Bryan Beecham, and Tim Bourguignon for that serendipitous conversation in the park in London. Thanks to Tim for having been there in the park with me. (No thanks to Woody for waiting till we’d left London before arriving. At least David Heath and GDS got to see him. Hmph.)

Thanks to Lisi Hocke for making my wish a reality: that her testing tour and my coding tour would intersect. As a developer, I have so much to learn about testing and so few chances to learn from the best. She made it happen. A perfect ending for my tour.

Thanks to Ryan Ripley for having me on Agile for Humans a couple more times as the tour progressed. I can’t say enough about what Ryan and his show have done for me, so this’ll have to be enough.

Thanks to everyone else who helped draw special attention to my tour when I was seeking companies to visit, most notably Kent Beck. It really did help.

Another reason companies cited for inviting me: my micropodcast, Agile in 3 Minutes. Thanks to Johanna Rothman, Andrea Goulet, Lanette Creamer, Alex Harms, and Jessica Kerr for your wonderful guest episodes. You’ve done me and our listeners a kindness. I trust it will come back to you.

Thank you to my family for supporting my attempts at growth, especially when I so clearly need it.

Finally, thanks to all of you for following along and for helping me find the kind of consulting work I’m best at, close to home in New York. You can count on me continuing to learn things and continuing to share them with you.


March 17, 2018

Hubert Feyrer The adventure of rebuilding g4u from source
I was asked by a long-time g4u user for help with rebuilding g4u from sources. After pointing at the instructions on the homepage, we figured out that a few loose odds and ends didn't match. After bouncing some advice back and forth, I ventured into the frabjous joy of starting a rebuild from scratch, and quickly enough ran into some problems, too.

Usually I cross-compile g4u from Mac OS X, but for the fun of it I did it on NetBSD (7.0-stable branch, amd64 architecture in VMware Fusion) this time. After waiting forever on the CVS checkout, I found that empty directories were not removed - that's what you get if you don't have -P in your ~/.cvsrc file.

I already had the hint that the "g4u-build" script needed a change to have "G4U_BUILD_KERNEL=true".

From there, things went almost smoothly: the build flagged a few files with "variable may be used uninitialized" errors, which -- thanks to -Werror -- bombed out the build. Fixing them was easy, and I have no idea why that built for me on the release. I have sent a patch with the required changes to the g4u-help mailing list. (After fixing that I apparently got unsubscribed from my own support mailing list - thank you very much, SourceForge ;)).

After those little hassles, the build worked fine, and gave me the floppy disk and ISO images that I expected:

>       ls -l `pwd`/g4u*fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u1.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u2.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u3.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u4.fs
>       ls -l `pwd`/g4u.iso
>       -rw-r--r--  2 feyrer  staff  6567936 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u.iso
>       ls -l `pwd`/g4u-kernel.gz
>       -rw-r--r--  1 feyrer  staff  6035680 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u-kernel.gz 
Next steps are to confirm the above changes as working with my faithful tester, and then look into how to merge this into the build instructions.

January 12, 2018

Super User What is the default File System in NetBSD? What are its benefits and shortcomings?

I spent some time looking through the documentation, but honestly, I have not found any good answer.

I understand NetBSD supports many FS types in userspace, but I would like to know which FS the installer creates by default, and which one I could boot from.

January 04, 2018

Hubert Feyrer NetBSD 7.1.1 released
On December 22nd, NetBSD 7.1.1 was released as a premature Christmas present; see the release announcement.

NetBSD 7.1.1 is the first update with security and critical fixes for the NetBSD 7.1 branch. Those include a number of fixes for security advisories, kernel and userland.

Hubert Feyrer New year, new security advisories!
So things have become a bit silent here, which is due to real life - my apologies. Still, I'd like to wish everyone following along a Happy New Year 2018! And with that, a few new security advisories have been published:
Hubert Feyrer 34C3 talk: Are all BSDs created equally?
I haven't seen this mentioned on the NetBSD mailing lists, and this may be of interest to some - there was a talk about security bugs in the various BSDs at the 34th Chaos Communication Congress:

In summary, many reasons for bugs are shown in many areas of the kernel (system calls, file systems, network stack, compat layer, ...), and what has happened after they were made known to the projects.

As a hint, NetBSD still has a number of Security Advisories to publish, it seems. Anyone wants to help out the security team? :-)

June 22, 2017

Server Fault How to log ssh client connection/command?

I would like to know how I could log the SSH command lines a user runs on a server. For example, if the user Alex on my server runs the following set of commands:

$ cd /tmp
$ touch myfile
$ ssh [email protected]
$ ssh [email protected]
$ vim anotherfile
$ ssh [email protected]

I would like to log the ssh commands used on the server in a file which looks like :

[2014-07-25 10:10:10] Alex : ssh [email protected]
[2014-07-25 10:18:20] Alex : ssh [email protected]
[2014-07-25 11:15:10] Alex : ssh [email protected]

I don't care what he did during his ssh session; I just want to know WHEN and TO WHERE he made a connection to another server.

The user is not using bash, and I would like to avoid relying on .bash_history anyway, as the user can modify it.

Any clue on this?

Thank you :)

edit: to be more specific:

a user connects to server A and then connects from server A to server B. I want to track down which server he connects to through ssh from server A.

June 08, 2017

Hubert Feyrer g4u 2.6 released
After a five-year period for beta-testing and updating, I have finally released g4u 2.6. With its origins in 1999, I'd like to say: Happy 18th Birthday, g4u!

About g4u: g4u ("ghosting for unix") is a NetBSD-based bootfloppy/CD-ROM that allows easy cloning of PC harddisks to deploy a common setup on a number of PCs using FTP. The floppy/CD offers two functions: the first is to upload the compressed image of a local harddisk to an FTP server; the other is to restore that image via FTP, uncompress it, and write it back to disk. Network configuration is fetched via DHCP. As the harddisk is processed as an image, any filesystem and operating system can be deployed using g4u. Easy cloning of local disks as well as partitions is also supported.
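In practice the two functions map to two commands at the g4u prompt (server and file names illustrative):

```
uploaddisk ftp.example.com mydisk.gz   # compress the local disk, send it to the FTP server
slurpdisk  ftp.example.com mydisk.gz   # fetch the image and write it back to disk
```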

The past: When I started g4u, I had the task of installing a number of lab machines with a dual-boot of Windows NT and NetBSD. The hype was about Microsoft's "Zero Administration Kit" (ZAK) then, but that barely worked for the Windows part - file transfers were slow, depended a lot on the clients' hardware (requiring fiddling with MS-DOS network driver disks), and on the ZAK server the files for installing happened to disappear for no good reason every now and then. Since that didn't work well, and left out NetBSD (and everything else), I created g4u. This gave me the (relative) pain of getting things working once, but with the option to easily add network drivers as they appeared in NetBSD (and oh, they did!), plus it allowed me to install any operating system.

The present: We've used g4u successfully in our labs since then, booting from CD-ROM. I also got many donations from public and private institutions plus companies from many sectors, indicating that g4u does make a difference.

In the meantime, the world has changed, and CD-ROMs aren't used that much any more. Network boot and USB sticks are today's devices of choice; cloning a full disk without knowing its structure has both advantages and disadvantages; and g4u's user interface is still command-line based, with not much room for automation. For storage, FTP servers are nice and fast, but alternatives like SSH/SFTP, NFS, iSCSI, and SMB for remote storage, plus local storage (back to fun with filesystems, anyone? avoiding this was why g4u was created in the first place!), should be considered these days. Further aspects include integrity (checksums) and confidentiality (encryption). This leaves a number of open points to address either in future releases or by other products.

The future: At this point, my time budget for g4u is very limited. I welcome people to contribute to g4u - g4u is Open Source for a reason. Feel free to get back to me for any changes that you want to contribute!

The changes: Major changes in g4u 2.6 include:

The software: Please see the g4u homepage's download section on how to get and use g4u.