NetBSD Planet

June 26, 2019

Roy Marples dhcpcd-ui-0.7.6 released

It's been a very long time since the last release - over 2.5 years! It's seen a lot of changes since then, but mainly minor improvements here and there. Some of the important changes are:

NetBSD Blog Adapting TriforceAFL for NetBSD, Part 1
Prepared by Akul Pillai as part of GSoC 2019.

The first coding period of Google Summer of Code has come to an end. It has been a great experience so far, and I got the opportunity to learn a lot of new things. This is a report on the work I have done during this coding period.

About TriforceAFL

TriforceAFL is a modified version of AFL that supports fuzzing using QEMU's full system emulation. This offers several advantages, such as not having to recompile pieces of the kernel with AFL instrumentation or build the kernel with coverage support. More details on other advantages, and on the design and implementation of TriforceAFL, can be found here.

The TriforceLinuxSyscallFuzzer and the TriforceOpenBSDSyscallFuzzer are syscall fuzzers built on top of TriforceAFL. The end goal of this project is to adapt TriforceAFL for NetBSD syscall fuzzing.

Adapted TriforceAFL for pkgsrc-wip

One of the end goals of the project is to make the fuzzer available as a pkgsrc package. To do so, TriforceAFL first had to be ported to pkgsrc. TriforceAFL uses qemu, so the appropriate NetBSD patches to qemu were applied and a few other minor issues were resolved. The working package is now available in pkgsrc-wip.

The NetBSD Syscall Fuzzer

TriforceNetBSDSyscallFuzzer can now be used to perform system call fuzzing of NetBSD kernels using AFL and QEMU. The setup scripts and the driver program are functioning. The syscall list has been updated for NetBSD and basic input generation works. Documentation detailing the setup process (of the NetBSD installation/kernel image), building, and fuzzing is available along with the code on GitHub.

The fuzzer functions properly and detects crashes, which can be reproduced using the driver. However, it could benefit greatly from better input generation and optimisation. This will be the focus in the next coding period.


In the coming weeks, the work will focus on optimizing the fuzzer so that it is more efficient at catching bugs. I am looking forward to doing so and to making TriforceNetBSDSyscallFuzzer available on NetBSD through pkgsrc.

Lastly, I would like to thank my mentor, Kamil Rytarowski, for always helping me through the process and guiding me whenever I needed any help.

Roy Marples dhcpcd-7.2.3 released

Minor update with the following changes:

June 22, 2019

DragonFly BSD Digest In Other BSDs for 2019/06/22

Done at the last minute!

June 15, 2019

DragonFly BSD Digest In Other BSDs for 2019/06/15

I linked to some other BSD link roundups, so the joy of clicking can continue.

June 12, 2019

Super User NetBSD - no pkg

After a full installation of the latest NetBSD, I tried to launch pkgin but received "pkgin not found"; I got the same for pkgsrc. Then I found that there is no /usr/pkg location.

Is that normal, or did I do something wrong?

June 08, 2019

DragonFly BSD Digest In Other BSDs for 2019/06/08

I was a bit low on BSD links last week; not so this week.

June 05, 2019

NetBSD Blog XSAVE and compat32 kernel work for LLDB

Upstream describes LLDB as a next generation, high-performance debugger. It is built on top of the LLVM/Clang toolchain and features great integration with it. At the moment, it primarily supports debugging C, C++, and ObjC code, and there is interest in extending it to more languages.

In February, I started working on LLDB under contract with the NetBSD Foundation. So far I've been working on re-enabling continuous integration, squashing bugs, improving NetBSD core file support, and lately extending NetBSD's ptrace interface to cover more register types. You can read more about that in my April 2019 report.

In May, I primarily continued the work on the new ptrace interface. Besides that, I found and fixed a bug in the ptrace() compat32 code, pushed the LLVM buildbot to ‘green’ status, and found some upstream LLVM regressions. More below.

Buildbot status update

Traditionally, let's start with buildbot updates. The buildbot provides continuous integration for a number of LLVM projects on NetBSD, including LLDB, clang, and clang's runtime libraries. It's available at:

Previously, the most significant problem in using the buildbot was the flakiness of LLDB tests, which resulted in frequent false positives. I was finally able to reduce this effect by lowering the number of parallel test runs for the LLDB tests. To avoid slowing down the other test suites, I used a sed hack that overrides the job count directly in the specific lit invocation.

Additionally, I have fixed a few regressions during the period, notably:

  • worked around missing nexttowardl() in NetBSD 8 causing libc++ test failure, by using std::nextafter() in the problematic test: r360673,

  • fixed compiler path test to work correctly without specific linker being available: r360761,

  • fixed inferring source paths in libunwind that prevented the tests from finding libc++: r361931,

  • removed test case that relied on read() attempt from a directory producing very specific error message: r362404 (NB: this failed because NetBSD permits reading from directory descriptors).

Those fixes permitted the buildbot to become green for a short period of time. Sadly, shortly afterwards one of AMDGPU tests started failing and we are still trying to find the cause.

Adding register read/write tests to ATF tests

Last month, I implemented a number of register reading/writing tests for LLDB. This month I've introduced matching tests in NetBSD's ATF test suite. This makes it possible to test NetBSD's ptrace implementation directly on the large variety of platforms and kernels supported by NetBSD. Given the dynamic development of NetBSD, running the LLDB tests everywhere would not be feasible.

While porting the tests, I've made a number of improvements, some of them requested specifically by LLDB upstream. Those include:

  • starting to use better input/output operands for assembly, effectively reducing the number of direct register references and redundant code: r359978,

  • using more readable/predictable constants for register data, read part: r360041, write part: r360154,

  • using %0 and %1 operands to reference memory portably between i386 and amd64: r360148.

The relevant NetBSD commits for added tests are (using the git mirror):

While working on this, I've also noticed that struct fpreg and struct xmmregs are not fully specified on i386. In bbc3f184d470, I've added the fields needed to make use of those structures convenient.

Fixing compat32: request mapping and debug registers

Kamil has asked me to look into PR#54233 indicating problems with 32-bit application debugging on amd64. While the problem in question most likely combines multiple issues, one specifically related to my work was missing PT_*DBREGS support in compat32.

While working on this, I found out that the functions responsible for implementing those requests were not called at all. After investigating, I came to the following conclusion: the i386 userland code passed PT_* request codes corresponding to i386 headers to the compat32 layer. The compat32 layer passed those codes unmodified to the common kernel code and compared them to the PT_* constants available in kernel code, which happened to be the amd64 constants.

This worked fine for low request numbers that happened to match on both architectures. However, i386 adds two additional requests (PT_*XMMREGS) after PT_SETFPREGS, so all subsequent requests are offset.

To solve this, I've created a request code mapping function that converts i386 codes coming from userland to the matching amd64 values used in the kernel. For the time being, this supports only requests common to both architectures, and therefore PT_*XMMREGS can't be implemented without further hacking it.

Once I managed to fix compat32, I went ahead and implemented PT_*DBREGS in compat32. Kamil had made an initial implementation in the past, but it was commented out and lacked input verification. However, I chose to change the implementation a bit and reuse the x86_dbregs_read() and x86_dbregs_write() functions rather than altering the pcb directly. I also added the needed value checks for PT_SETDBREGS.

Both changes were committed to /usr/src:

Initial XSAVE work

In the previous report, I considered which approach to take in order to provide access to the additional FPU registers via ptrace. Eventually, the approach of exposing the raw contents of the XSAVE area got the blessing, and I started implementing it.

However, this approach proved impractical. The XSAVE area in standard format (which we are using) consists of three parts: FXSAVE-compatible legacy area, XSAVE header and zero or more extended components. The offsets of those extended components turned out to be unpredictable and potentially differing between various CPUs. The architecture developer's manual indicates that the relevant offsets can be obtained using CPUID calls.

Apparently, neither Linux nor FreeBSD took this into consideration when implementing their APIs, and they effectively require the caller to issue CPUID calls directly. While such an approach could be doable in NetBSD, it would prevent core dumps from working correctly on a different CPU. Therefore, it would be necessary to perform the calls in the kernel instead, and include the results along with the XSAVE data.

However, I believe that doing so would introduce unnecessary complexity for no clear gain. Therefore, I proposed two alternative solutions. They were to either:

  1. copy XSAVE data into custom structure with predictable indices, or

  2. implement separate PT_* requests for each component group, with separate data structure each.

Comparison of the two proposed solutions

Both solutions are roughly equivalent. The main difference between them is that the first solution covers all extended registers (and is future-extensible) in one request call, while the second requires a new pair of requests for each new register set.

I personally prefer the former solution because it reduces the number of ptrace calls needed to perform typical operations. This is especially relevant when reading registers whose contents are split between multiple components: the YMM registers (whose lower bits are in the SSE area) and the lower ZMM registers (whose lower bits are the YMM registers).

Example code reading the ZMM register using a single request solution would look like:

struct xstate xst;
struct iovec iov;
char zmm_reg[64];

iov.iov_base = &xst;
iov.iov_len = sizeof(xst);

ptrace(PT_GETXSTATE, child_pid, &iov, 0);

// verify that all necessary components are available
assert(xst.xs_xstate_bv & XCR0_SSE);
assert(xst.xs_xstate_bv & XCR0_YMM_Hi128);
assert(xst.xs_xstate_bv & XCR0_ZMM_Hi256);

// combine the values
memcpy(&zmm_reg[0], &xst.xs_fxsave.fx_xmm[0], 16);
memcpy(&zmm_reg[16], &xst.xs_ymm_hi128.xs_ymm[0], 16);
memcpy(&zmm_reg[32], &xst.xs_zmm_hi256.xs_zmm[0], 32);

For comparison, the equivalent code for the other variant would roughly be:

#if defined(__x86_64__)
struct fpreg fpr;
#else
struct xmmregs fpr;
#endif
struct ymmregs ymmr;
struct zmmregs zmmr;
char zmm_reg[64];

#if defined(__x86_64__)
ptrace(PT_GETFPREGS, child_pid, &fpr, 0);
#else
ptrace(PT_GETXMMREGS, child_pid, &fpr, 0);
#endif
ptrace(PT_GETYMMREGS, child_pid, &ymmr, 0);
ptrace(PT_GETZMMREGS, child_pid, &zmmr, 0);

memcpy(&zmm_reg[0], &fpr.fxstate.fx_xmm[0], 16);
memcpy(&zmm_reg[16], &ymmr.xs_ymm_hi128.xs_ymm[0], 16);
memcpy(&zmm_reg[32], &zmmr.xs_zmm_hi256.xs_zmm[0], 32);

I've submitted a patch set implementing the first solution, as it was easier to convert to from the initial approach. If the feedback indicates a preference for the other solution, converting should be similarly easy the other way around. The patch set is available on the tech-kern mailing list: [PATCH 0/2] PT_{GET,SET}XSTATE implementation, WIP v1.

The initial implementation should support getting and setting x87, SSE, AVX and AVX-512 registers (i.e. all types currently enabled in the kernel). The tests cover all but AVX-512. I have tested it on native amd64 and i386, and via compat32.

Future plans

The most immediate goal is to finish the work on XSAVE. This includes responding to any feedback received, finding AVX-512 hardware to test on, writing tests for the AVX-512 registers, and eventually committing the patches to the NetBSD kernel. Once this is done, I need to extend XSAVE support to core dumps and implement the userland side of both in LLDB.

Besides that, the next items on TODO are:

  1. Adding support for debug registers (moved from last month's TODO).

  2. Adding support for backtracing through the signal trampoline.

  3. Working on the i386 and aarch64 LLDB ports.

In the meantime, Kamil is going to continue working on improving fork and thread support kernel-side, preparing it for my work on the LLDB side.

This work is sponsored by The NetBSD Foundation

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

NetBSD Blog NetBSD 8.1 available

The NetBSD Project is pleased to announce NetBSD 8.1, the first feature and stability maintenance release of the netbsd-8 stable branch.

Besides the workarounds for the latest CPU specific vulnerabilities, this also includes many bug fixes and a few selected new drivers. For more details and instructions see the 8.1 announcement.

Get NetBSD 8.1 from our CDN (provided by fastly) or one of the ftp mirrors.

Complete source and binaries for NetBSD are available for download at many sites around the world. A list of download sites providing FTP, AnonCVS, and other services may be found at

NetBSD Blog Validation and improvements of debugging interfaces
In the past month, I worked on improving the correctness and reliability of process tracing in the kernel codebase.
I took part in BSDCan 2019 and during the event wrote a NetBSD version of truss, a ptrace(2)-powered syscall tracing utility from FreeBSD. I've finished the port after getting back home and published it to the NetBSD community. This work allowed me to validate the ptrace(2) interfaces in another application and catch new problems that affect every ptrace(2)-based debugger.

Changes in the basesystem

A number of modifications were introduced to the main NetBSD repository which include ptrace(2), kernel sanitizer, and kernel coverage.

Improvements in the ptrace(2) code:

I mentor two GSoC projects and support community members working on kernel fuzzers. As a result of this work, fixes land in the distribution source code from time to time. We have achieved an important milestone: NetBSD/amd64 can now run on real hardware with the Kernel Undefined Behavior Sanitizer without any reports during boot and execution of the ATF tests. This will allow us to make kUBSan reports fatal and use them in the fuzzing process, capturing bugs such as integer overflows, out-of-bounds array use, invalid shifts, etc.

I have also introduced support for a new sysctl(3) operation: KERN_PROC_CWD. This call retrieves the current working directory of a specified process. The operation is typically used in terminal-related applications (tmux, terminator, ...) and generic process-status tools (Python psutil). I found out that after a system upgrade my terminal emulator didn't work correctly, and that it uses a fallback of resolving the /proc/*/cwd symbolic link.

ptrace(2) utilities

In order to validate the kernel interfaces more easily, I wrote a number of utility programs that use the ptrace(2) functionality. The new programs reuse the previously announced picotrace code as a framework.

Some programs have already been published; others are still in progress and kept locally on my disk only. The new programs published so far:

A more elaborate introduction with examples is documented in the GitHub repository. I use these programs to great profit, as they save my precious time when debugging issues in programs, and as validators for the kernel interfaces, since they work closely over the kernel APIs (contrary to the large code bases of GDB or LLDB). I've already detected and verified various kernel ptrace(2) issues with these programs. With a debugger, it's always unclear whether a problem is in the debugger, its libraries, or the kernel.

Among the detected issues, the notable ones are as follows:

Update in Problem Report entries

I've iterated over the reported bugs in the gnats tracking system. These problems were still valid at the beginning of this year and are now reported to be gone:


There are still occasionally new bugs reported in ptrace(2) or GDB, but they are usually against racy ATF tests or complex programs. I can also observe a difference: simple and moderately complex programs usually work now, and the reports are for heavy ones like Firefox (multiple threads and multiple processes).

I estimate that there are still at least 3 critical threading issues to be resolved, races and use-after-free scenarios in vfork(2), and a dozen other more generic problems, typically in signal routing semantics.

Plan for the next milestone

Cover the posix_spawn(2) interface with regression tests and, if needed, correct the kernel code. This will be followed by resolving the use-after-free scenarios in vfork(2). This is expected to complete the tasks related to the forking code.

My next goal on the roadmap is to return to LWP (threading) code and fix all currently known problems.

Independently I will keep supporting the work on kernel fuzzing projects within the GSoC context.

This work was sponsored by The NetBSD Foundation.

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

June 01, 2019

DragonFly BSD Digest In Other BSDs for 2019/06/01

I have an inexplicably short BSD section this week. Post-BSDCan fatigue? I don’t know, but I have lots on the docket for tomorrow, I promise.

New Developers in May 2019

May 31, 2019 NetBSD 8.1 released

May 25, 2019

DragonFly BSD Digest In Other BSDs for 2019/05/25

Watch for post-BSDCan slides and material and ideas to show up in the next few weeks.


May 24, 2019

NetBSD Installation and Upgrading on DaemonForums no bootable device after installation
After installing NetBSD 8 I have a couple of problems.
1. If the USB drive with the installation image is not inserted, the system will not boot.
2. Running X -configure causes a reboot.

1. Without the installation USB:

PXE-M0F: Exiting PXE ROM.
No bootable -- insert boot disk and press any key

The first time, I thought I had made a mistake and done something to the BIOS, but the partitions look fine, just as they should per The Guide:

a:  0    472983    472984    FFSv2
b:  472984    476939    3985    swap
c:  0    476939    476939    NetBSD partition
d:  0    476939    476940    whole disc
e:  0    0    0    unused

I am at a bit of a loss, since as far as I know it should not be possible to set an installation medium as the boot source of an OS.

2. I do not know if this is unsupported hardware or related to #1.

DRM error in radeon_get_bios:
Unable to locate a BIOS ROM
radeon0: error: Fatal error during GPU init

I am trawling through documentation, but on a telephone, so I also cannot post a dmesg, although I can look through other threads where one is posted and copy it. (A little later in the day.)

May 20, 2019

NetBSD Blog NetBSD 8.1 Release Candidate 1

The NetBSD Project is pleased to announce NetBSD 8.1 RC1, the first (and hopefully final) release candidate for the upcoming NetBSD 8.1 release.

Over the last year, many changes have been made to the NetBSD 8 stable branch. As it is a stable branch, the release engineering team and the NetBSD developers are conservative with changes to it, and many users rely on the binaries from our regular auto-builds for production use. Now it is high time to cut a formal release, right before we go into the next release cycle with the upcoming branch for NetBSD 9.

Besides the workarounds for the latest CPU specific vulnerabilities, this also includes many bug fixes and a few selected new drivers. For more details and instructions see the 8.1 RC1 announcement.

Get NetBSD 8.1 RC1 from our CDN (provided by fastly) or one of the ftp mirrors.

Complete source and binaries for NetBSD are available for download at many sites around the world. A list of download sites providing FTP, AnonCVS, and other services may be found at

Please test RC1; we are looking forward to your feedback. Please send-pr any bugs, or mail us at releng at for more general comments.

NetBSD 8.1_RC1 binaries available

May 04, 2019

Roy Marples dhcpcd-7.2.2 released

dhcpcd-7.2.2 has been released with the following fixes:

This security issue has been addressed


Patch for dhcpcd-7 if you don't want to upgrade to dhcpcd-7.2.2:

dhcpcd-6.11.7 has been released as well, with this fix included. I have no plans to fix earlier versions; heck, you shouldn't even be using dhcpcd-6!

Many thanks to Maxime Villard [email protected] for discovering this issue.


May 02, 2019 New Security Advisories: NetBSD-SA2019-00[2-3]

May 01, 2019 New Developer in April 2019

April 26, 2019

Roy Marples dhcpcd-7.2.1 released

dhcpcd-7.2.1 has been released with the following changes:

These security issues are also addressed:

Especially if you are using dhcpcd-7

Patch for dhcpcd-7 if you don't want to upgrade to dhcpcd-7.2.1:

dhcpcd-6.11.6 has been released as well, with the two applicable fixes included. I have no plans to fix earlier versions; heck, you shouldn't even be using dhcpcd-6!

Many thanks to Maxime Villard [email protected] for discovering these issues.


April 18, 2019

Unix Stack Exchange Automatic install NetBSD ISO

Is there some form of automated NetBSD installation?

I would like to automate this manual process:

But I found no examples.

April 17, 2019

Roy Marples dhcpcd-7.2.0 released

dhcpcd-7.2.0 has been released with the following changes of note:

Sorry for the longer-than-normal delay in getting this release out. Anyway, this is likely the last feature release from the -7 branch; just minor bug fixes and any security fixes from this point. A dhcpcd-7 branch has now been created for maintenance.


April 10, 2019

Unix Stack Exchange NetBSD desktop install

I've had NetBSD 7 installed for several years, running MariaDB 5.5.52, gcc 4.8.4, and Geany. I would like to install a slightly more advanced desktop than X Windows.

When I cd into /usr/pkgsrc/meta-pkgs/lxdew or xfce4 and do

make install clean 

I get the same error msg for both.

Conflicting PLIST with glib1-1.48.2: bin/glib-genmarshal

How do I resolve this conflict? Can I overwrite the existing glib file without harm to MariaDB or GCC? If so, how?

March 15, 2019

Stack Overflow host netbsd 1.4 or 1.5 i386 cross-compile target macppc powerpc g3 program

For some reason, I want to develop a program that can work on NetBSD 1.4 or 1.5 on powerpc; the target CPU is a PowerPC 750 (macppc platform, a nearly 20-year-old system), but I can't find out how to build this kind of cross-compile environment: VMware host i386 NetBSD 1.5 + egcs 1.1.1 + binutils 2.9.1 ---> target host macppc powerpc NetBSD 1.5 + egcs 1.1.1. I downloaded and installed NetBSD 1.5 in VMware and downloaded pkgsrc; when I make /usr/src/pkgsrc/cross/powerpc-netbsd, I get a gcc that works on i386 but not a cross-gcc. Why? Thank you for any help!

March 07, 2019

Amitai Schlair NYCBUG: Maintaining qmail in 2019

On Wednesday, March 6, I attended New York City BSD User Group and presented Maintaining qmail in 2019. This one pairs nicely with my recent DevOpsDays Ignite talk about why and how to Run Your @wn Email Server! That this particular “how” could be explained in 5 minutes is remarkable, if I may say so myself. In this NYCBUG talk — my first since 2014 — I show my work. It’s a real-world, open-source tale of methodically, incrementally reducing complexity in order to afford added functionality.

My abstract:

qmail 1.03 was notoriously bothersome to deploy. Twenty years later, for common use cases, I’ve finally made it pretty easy. If you want to try it out, I’ll help! (Don’t worry, it’s even easier to uninstall.) Or just listen as I share the sequence of stepwise improvements from then to now — including pkgsrc packaging, new code, and testing on lots of platforms — as well as the reasons I keep finding this project worthwhile.

Here’s the video:

February 23, 2019

Stack Overflow How to perform a 308 open redirect with php and apache?

I want to perform an open redirect, so that one URL would redirect to another.

Here’s /index.cgi, which of course has exec permissions:

<?php
header("Location: ".$_GET["endpoint"], true, 307);

and Here’s /flashredirect/.htaccess

Options FollowSymLinks
Options +ExecCGI
AddHandler cgi-script .cgi .pl
RewriteEngine On
RewriteBase /
FallbackResource /index.cgi

Obviously, there’s an error somewhere, but where? Also, accessing error logs is a paid feature on my host, so I can’t find the problem.

February 18, 2019

Stack Overflow NetBSD long double trouble

I have simple code:

 #include <stdio.h>

 int main()
 {
      //char d[10] = {0x13, 0x43, 0x9b, 0x64, 0x28, 0xf8, 0xff, 0x7f, 0x00, 0x00};
      //long double rd = *(long double*)&d;
      long double rd = 3.3621e-4932L;
      printf("%Le\n", rd);
      return 0;
 }
On my Ubuntu x64 it prints 3.362100e-4932, as expected. On my NetBSD it prints 1.681050e-4932.

Why does this happen, and how can I fix it? I tried clang and gcc with the same result.

My system (VM inside VirtualBox 5.0):

 uname -a
 NetBSD netbsd.home 7.0 NetBSD 7.0 (GENERIC.201509250726Z) amd64

 gcc --version
 gcc (nb2 20150115) 4.8.4

 clang --version
 clang version 3.6.2 (tags/RELEASE_362/final)
 Target: x86_64--netbsd
 Thread model: posix


/usr/include/x86/float.h defines LDBL_MIN as 3.3621031431120935063E-4932L, and this value is greater than the printf result.

February 06, 2019

Stack Overflow Disabling/Enabling interrupts on x86 architectures

I am using NetBSD 5.1 on x86 systems. While studying some driver-related code, I saw that we use splraise and spllower to block or allow interrupts. I searched for these mechanisms on the internet to understand how they work in reality, but did not find any real info.

When I disassembled the code, I got the mechanism but still do not understand how all these assembly instructions yield the result. I know the x86 instructions individually, but not how the whole thing works in its entirety.

I need your help in understanding its principles on x86 systems. I understand that we need to disable the Interrupt Enable (IE) bit, but this assembly seems to be doing more than just that.

  (gdb) x/50i splraise
   0xc0100d40:  mov    0x4(%esp),%edx
   0xc0100d44:  mov    %fs:0x214,%eax
   0xc0100d4a:  cmp    %edx,%eax
   0xc0100d4c:  ja     0xc0100d55
   0xc0100d4e:  mov    %edx,%fs:0x214
   0xc0100d55:  ret
   0xc0100d56:  lea    0x0(%esi),%esi
   0xc0100d59:  lea    0x0(%edi,%eiz,1),%edi
   (gdb) p spllower
   $38 = {<text variable, no debug info>} 0xc0100d60
   0xc0100d60:  mov    0x4(%esp),%ecx
   0xc0100d64:  mov    %fs:0x214,%edx
   0xc0100d6b:  cmp    %edx,%ecx
   0xc0100d6d:  push   %ebx
   0xc0100d6e:  jae,pn 0xc0100d8f
   0xc0100d71:  mov    %fs:0x210,%eax
   0xc0100d77:  test   %eax,%fs:0x244(,%ecx,4)
   0xc0100d7f:  mov    %eax,%ebx
   0xc0100d81:  jne,pn 0xc0100d91
   0xc0100d84:  cmpxchg8b %fs:0x210
   0xc0100d8c:  jne,pn 0xc0100d71
   0xc0100d8f:  pop    %ebx
   0xc0100d90:  ret
   0xc0100d91:  pop    %ebx
   0xc0100d92:  jmp    0xc0100df0
   0xc0100d97:  mov    %esi,%esi
   0xc0100d99:  lea    0x0(%edi,%eiz,1),%edi
   0xc0100da0:  mov    0x4(%esp),%ecx
   0xc0100da4:  mov    %fs:0x214,%edx
   0xc0100dab:  cmp    %edx,%ecx
   0xc0100dad:  push   %ebx
   0xc0100dae:  jae,pn 0xc0100dcf
   0xc0100db1:  mov    %fs:0x210,%eax
   0xc0100db7:  test   %eax,%fs:0x244(,%ecx,4)
   0xc0100dbf:  mov    %eax,%ebx
   0xc0100dc1:  jne,pn 0xc0100dd1
   0xc0100dc4:  cmpxchg8b %fs:0x210
   0xc0100dcc:  jne,pn 0xc0100db1
   0xc0100dcf:  pop    %ebx
   0xc0100dd0:  ret
   0xc0100dd1:  pop    %ebx
   0xc0100dd2:  jmp    0xc0100df0
   0xc0100dd7:  mov    %esi,%esi
   0xc0100dd9:  lea    0x0(%edi,%eiz,1),%edi
   0xc0100de0:  nop
   0xc0100de1:  jmp    0xc0100df0

The code seems to be using a helper function cx8_spllower starting at address 0xc0100da0.

Unix Stack Exchange Setting wallpaper in NetBSD JWM

I have installed "NetBSD JWM" in the virtual environment of VMware Workstation 14 Pro. I'd like to set wallpaper but I do not know how to do it.

January 28, 2019

Stack Overflow configuration of tty on BSD system

For a command like this one on Linux debian-linux 4.19.0-1-amd64 #1 SMP Debian 4.19.12-1 (2018-12-22) x86_64 GNU/Linux with xfce I get :

[email protected]:~$ dbus-send --system --type=method_call --print-reply --dest
=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListActivatable  

With the same command on OpenBSD (LeOpenBSD 6.4 GENERIC.MP#364 amd64) with xfce, I get:

ktop/DBus org.freedesktop.DBus.ListActivatableNames   <

On Linux, at the end of the screen, we go to the next line.
On BSD (OpenBSD, NetBSD), the command line continues on the same line and the first words disappear.
It's the same in xfce-terminal-emulator, xterm, or in a TTY (Alt-Ctrl-F3).

I tried to add am to gettytab in the default section, to no avail.
The termcap man page says:
If the display wraps around to the beginning of the next line when the cursor reaches the right margin, then it should have the am capability.
What can I do?

January 25, 2019

Amitai Schlair DevOpsDays NYC: Run Your @wn Email Server!

In late January, I was at DevOpsDays NYC in midtown Manhattan to present Run Your @wn Email Server!

My abstract:

When we’re responsible for production, it can be hard to find room to learn. That’s why I run my own email server. It’s still “production” — if it stays down, that’s pretty bad — but I own all the decisions, take more risks, and have learned lots. And so can you! Come see why and how to get started.

With one command, install famously secure email software. A couple more and it’s running. A few more and it’s encrypted. Twiddle your DNS, watch the mail start coming in, and start feeling responsible for a production service in a way that web hosting can’t match.

January 07, 2019

Amitai Schlair 2018Q4 qmail updates in pkgsrc

Happy 2019! Another three months, another stable branch for pkgsrc, the practical cross-platform Unix package manager. I’ve shipped quite a few improvements for qmail users in our 2018Q4 release. In three sentences:

  1. qmail-run gains TLS, SPF, IPv6, SMTP recipient checks, and many other sensible defaults.
  2. Most qmail-related packages — including the new ones used by qmail-run — are available on most pkgsrc platforms.
  3. rc.d-boot starts rc.conf-enabled pkgsrc services at boot time on many platforms.

In one:

It’s probably easy for you to run qmail now.

On this basis, at my DevOpsDays NYC talk in a few weeks, I’ll be recommending that everyone try it.

Try it

Here’s a demo on Debian 9:

The commands I ran:

$ cd ...pkgsrc/mail/qmail-run && make PKG_RCD_SCRIPTS=yes install
$ cd ../../pkgtools/rc.d-boot && make PKG_RCD_SCRIPTS=yes install

On platforms with binary packages available, it’s even easier:

$ sudo env PKG_RCD_SCRIPTS=yes pkgin -y install qmail-run rc.d-boot

These improvements were made possible by acceptutils, my redesigned TLS and SMTP AUTH implementation that obviates the need for several large and conflicting patches. Further improvements are expected.

Here’s the full changelog for qmail as packaged in pkgsrc-2018Q4.




December 16, 2018

Unix Stack Exchange pkgin installation problem (NetBSD)

I just installed NetBSD 7.1.1 (i386) on my old laptop.

During the installation, I could not install pkgin (I don't know why), so I skipped it, and now I have NetBSD 7.1.1 installed on my laptop without pkgin.

My problem is: "How do I install pkgin on NetBSD (i386)?"

I found this (Click) tutorial and I followed it:

I tried:

# export PKG_PATH=""
# pkg_add -v pkgin

And I got:

pkg_add: Can't process*: Not Found
pkg_add: no pkg found for 'pkgin',sorry.
pkg_add: 1 package addition failed

I know this is the wrong command because this ftp address is for amd64 while my laptop and NetBSD are i386. (I can't find the correct command for i386.)

I also followed instructions of (Click), and I did

git clone

on another computer and copied the output (which is a folder named pkgin) to my NetBSD machine (my NetBSD doesn't have the 'git' command)

and then I did :

./configure --prefix=/usr/pkg --with-libraries=/usr/pkg/lib --with-includes=/usr/pkg/include

and then:

# make

but I got:

#   compile  pkgin/summary.o
gcc -O2    -std=gnu99    -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wno-sign-compare  -Wno-traditional  -Wa,--fatal-warnings -Wreturn-type -Wswitch -Wshadow -Werror    -DPKGIN_VERSION=\""0.9.4 for NetBSD-7.1.1 i386"\" -DNETBSD  -g -DLOCALBASE=\"/usr/local\"           -DPKG_SYSCONFDIR=\"/usr/local/etc\"         -DPKG_DBDIR="\"/var/db/pkg\""           -DDEF_LOG_DIR="\"/var/db/pkg\""         -DPKGIN_DB=\"/var/db/pkgin\"            -DPKGTOOLS=\"/usr/local/sbin\" -DHAVE_CONFIG_H -D_LARGEFILE_SOURCE -D_LARGE_FILES -DCHECK_MACHINE_ARCH=\"i386\" -Iexternal -I. -I/usr/local/include  -c    summary.c
*** Error code 1

make: stopped in /root/pkgin

I think this error is because of missing dependencies (which are mentioned there), but I still don't know how to install those dependencies.

EDIT: I found "" but it still says

no pkg found for 'pkgin', sorry


Problem solved by writing 7.1 instead of 7.1.1
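For reference, a hedged sketch of what the working setup likely looked like (hostname and URL layout assumed from the usual NetBSD binary-package scheme, not taken from the post):

```shell
# Binary packages live under .../NetBSD/<arch>/<release>/All;
# the release component is 7.1 even on a 7.1.1 installation.
export PKG_PATH="http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.1/All"
pkg_add -v pkgin
```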

Unix Stack Exchange How to use 'pkg_add -uu' to upgrade all packages?

According to NetBSD's wiki I can use pkg_add -uu to upgrade packages. However, when I attempt to use pkg_add -uu it results in an error.

pkg_add -uu
pkg_add: missing package name(s)

pkg_add -uu *
pkg_add: no pkg found for `*`, sorry

pkg_add -uu all
pkg_add: no pkg found for `all`, sorry

I've tried to parse the pkg_add man page, but I can't tell what the command is to update everything.

I can't use pkg_chk because it's not installed, and I can't get the package system to install it:

pkg_chk -b
pkg_chk: command not found

pkg_add pkg_chk
pkg_add: no pkg found for `pkg_chk`, sorry

What is the secret command to get the OS to update everything?
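For context, a sketch of the approach commonly suggested for this situation (mirror URL assumed, not from the question): pkg_add -uu only upgrades packages named on its command line, so whole-system upgrades are usually delegated to pkgin instead:

```shell
# Tell pkg_add where the binary packages for this release/arch live:
export PKG_PATH="http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/$(uname -p)/$(uname -r)/All"

# -uu upgrades the named package plus its dependencies:
pkg_add -uu pkg_chk

# For "update everything", bootstrap pkgin once and use it from then on:
pkg_add pkgin
pkgin update
pkgin full-upgrade
```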

September 15, 2018

Amitai Schlair Coding Tour Summer 2018: Conclusion

After my fourth and final tour stop, we decamped to Mallorca for a week. With no upcoming workshops to polish and no upcoming plans to finalize, the laptop stayed home. Just each other, a variety of beaches, and the annual Les Festes del Rei En Jaume that Bekki and I last saw two years ago on our honeymoon. The parade was perhaps a bit much for Taavi.

Looking away

The just-released episode 99 of Agile for Humans includes some reflections (starting around 50 minutes in) from partway through my coding tour. As our summer in Germany draws to a close, I’d like to reflect on the tour as a whole.

Annual training

I’ve made a habit of setting aside time, attention, and money each year for focused learning. My most recent trainings, all formative and memorable:

I hoped Schleier, Coding Tour would fit the bill for 2018. It has.

Geek joy

At the outset, I was asked how I’d know whether the tour had gone well. My response: “It’s a success if I get to meet a bunch of people in a bunch of places and we have fun programming together.”

I got to program with a bunch of people in a bunch of places. We had fun doing it. Success!

New technologies

My first tour stop offered such an ecumenical mix of languages, tools, and techniques that I began writing down each new technology I encountered. I’m glad I started at the beginning. Even so, this list of things that were new or mostly new to me is probably incomplete:

In the moment, learning new technologies was a source of geek joy. In the aggregate, it’s professionally useful. I think the weight clients tend to place on consultants needing to be expert in their tech stack is dangerously misplaced, but it doesn’t matter what I think if they won’t bring me in. Any chance for me to broaden my tech background is a chance for a future client to take advantage of all the other reasons I can be valuable to them.


As Schmonz’s Theorem predicts, code-touring is both similar to and different from consulting.

When consulting, I expect most of my learning to be meta: the second loop (at least) of double-loop learning. When touring, I became reacquainted with the simple joys of the first loop, spending all day learning new things to be able to do. It often felt like play.

When consulting, I initially find myself being listened to in a peculiar way, my words being heard and measured carefully for evidence of my real intentions. My first tasks are to demonstrate that I can be trusted and that I can be useful, not necessarily in that (or any) order. Accomplishing this as a programmer on tour felt easier than usual.

When I’m consulting, not everyone I encounter wants me there. Some offer time and attention because they feel obligated. On this tour, even though some folks were surprised to find out their employer wasn’t paying me anything, I sensed people were sharing their time and attention with me out of curiosity and generosity. I believe I succeeded in making myself trusted and useful to each of them, and the conversation videos and written testimonials help me hold the belief.

Professional development

With so much practice designing and facilitating group activities, so much information-rich feedback from participants, and so many chances to try again soon, I’ve leveled up as a facilitator. I was comfortable with my skills, abilities, and material before; I’m even more comfortable now. In my tour’s final public meetup, I facilitated one of my legacy code exercises for three simultaneous mobs. It went pretty well — in large part because of the participants, but also because of my continually developing skill at designing and facilitating learning experiences.

As a consultant, it’s a basic survival skill to quickly orient myself in new problem spaces. As a coach, my superpower might be that I help others quickly orient themselves in their problem spaces. Visiting many teams at many companies, I got lots of practice at both. These areas of strength for me are now stronger, the better with which to serve my next clients.

On several occasions I asked mobs not to bother explaining the current context to me before starting the timer. My hypothesis was, all the context I’d need would reveal itself through doing the work and asking a question or two along the way. (One basis among many for this hypothesis: what happened when I showed up late to one of Lennart Fridén’s sessions at this spring’s Mob Programming Conference and everyone else had already read the manual for our CPU.) I think there was one scenario where this didn’t work extremely well, but my memory’s fuzzy — have I mentioned meeting a whole bunch of people at a whole bunch of workplaces, meetups, and conferences? — so I’ll have to report the details when I rediscover it.

You can do this too, and I can help

When designing my tour, I sought advice from several people who’d gone on one. (Along the way I met several more, including Ivan Sanchez at SPA in London and Daniel Temme at SoCraTes in Soltau.)

If you’re wondering whether a coding tour is something you want to do, or how to make it happen, get in touch. I’m happy to listen and offer my suggestions.

What’s next for me, and you can help

Like what I’m doing? Want more of it in your workplace?

I offer short, targeted engagements in the New York metro area — coaching, consulting, and training — co-designed with you to meet your organization’s needs.

More at


Yes, lots.

It’s been a splendid set of privileges to have the free time to go on tour, to have organizations in several countries interested to have me code with them, and to meet so many people who care about what I care about when humans develop software together.

Five years ago I was discovering the existence of a set of communities of shared values in software development and my need to feel connected to them. Today I’m surer than ever that I’ve needed this connection and that I’ve found it.

Thanks to the people who hosted me for a week at their employer: Patrick Drechsler at MATHEMA/Redheads in Erlangen, Alex Schladebeck at BREDEX in Braunschweig, Barney Dellar at Canon Medical Research in Edinburgh, and Thorsten Brunzendorf at codecentric in Nürnberg and München. And thanks to these companies for being willing to take a chance on bringing in an itinerant programmer for a visit.

Thanks and apologies in equal measure to Richard Groß, who did all the legwork to have me visit MaibornWolff in Frankfurt, only to have me cancel at just about the last minute. At least we got to enjoy each other’s company at Agile Coach Camp Germany and SoCraTes (the only two people to attend both!).

Thanks to David Heath at the UK’s Government Digital Service for inviting me to join them on extremely short notice when I had a free day in London, and to Olaf Lewitz for making the connection.

Thanks to the meetups and conferences where I was invited to present: Mallorca Software Craft, SPA Software in Practice, pkgsrcCon, Hackerkegeln, JUG Ostfalen, Lean Agile Edinburgh, NEBytes, and Munich Software Craft. And thanks to Agile Coach Camp Germany and SoCraTes for the open spaces I did my part to fill.

Thanks to Marc Burgauer, Jens Schauder, and Jutta Eckstein for making time to join me for a meal. Thanks to Zeb Ford-Reitz, Barney Dellar, and their respective spice for inviting me into their respective homes for dinner.

Thanks to J.B. Rainsberger for simple, actionable advice on making it easy for European companies to reimburse my expenses, and more broadly on the logistics of going on European consulting-and-speaking tours when one is from elsewhere. (BTW, his next tour begins soon.)

Thanks all over again to everyone who helped me design and plan the tour, most notably Dr. Sal Freudenberg, Llewellyn Falco, and Nicole Rauch.

Thanks to Woody Zuill, Bryan Beecham, and Tim Bourguignon for that serendipitous conversation in the park in London. Thanks to Tim for having been there in the park with me. (No thanks to Woody for waiting till we’d left London before arriving. At least David Heath and GDS got to see him. Hmph.)

Thanks to Lisi Hocke for making my wish a reality: that her testing tour and my coding tour would intersect. As a developer, I have so much to learn about testing and so few chances to learn from the best. She made it happen. A perfect ending for my tour.

Thanks to Ryan Ripley for having me on Agile for Humans a couple more times as the tour progressed. I can’t say enough about what Ryan and his show have done for me, so this’ll have to be enough.

Thanks to everyone else who helped draw special attention to my tour when I was seeking companies to visit, most notably Kent Beck. It really did help.

Another reason companies cited for inviting me: my micropodcast, Agile in 3 Minutes. Thanks to Johanna Rothman, Andrea Goulet, Lanette Creamer, Alex Harms, and Jessica Kerr for your wonderful guest episodes. You’ve done me and our listeners a kindness. I trust it will come back to you.

Thank you to my family for supporting my attempts at growth, especially when I so clearly need it.

Finally, thanks to all of you for following along and for helping me find the kind of consulting work I’m best at, close to home in New York. You can count on me continuing to learn things and continuing to share them with you.


September 01, 2018

Amitai Schlair Coding Tour Stop #4: codecentric

[Update: Thorsten wrote a few words about our week together. They’re quite nice.]

The final week of my coding tour came right on the heels of SoCraTes. On Sunday evening I went straight from Soltau to Nuremberg. On Monday morning, when Thorsten Brunzendorf met me in the lobby of my hotel, it was the first time I’d seen him since… Sunday afternoon. Like several other already-familiar-by-now faces, he’d been at SoCraTes too.

This tour stop was different from the start: we’d spend our first three days in one city and the last two in another.




Mobbing in Scala with Thorsten and Martin


Introducing ‘Strangle Your Legacy Code'


Navigating during Lisi's session

Here are all #CodingTour tweets from the week.


Even though the odds keep stacking higher and higher, the pattern of each tour stop varying meaningfully from the previous ones continued at codecentric. For instance, we chose not to do Learning Hours. I think one reason I didn’t particularly miss it is that I was so busy learning. I’d seen Scala once before, for an hour; here we worked in it all week, both in katas and on production code. And property-based testing had been on my long list of techniques I know I need to learn about. Now I’ve had it explained to me simply and well by Thorsten and I’ve used it enough to know some ways it can help me go where I want to go.

Lisi and me

I’m a fan of Lisi’s testing tour and of how she writes about it. The chance to learn with her felt like the perfect ending for my tour. Here’s her post about it. Next week I’ll offer my reflections on the tour as a whole.

Thorsten, Martin, and I got together Friday afternoon to share some of our highlights from the week. Have a look.

March 17, 2018

Hubert Feyrer The adventure of rebuilding g4u from source
I was asked by a long-time g4u user for help with rebuilding g4u from sources. After pointing at the instructions on the homepage, we figured out that a few loose odds and ends didn't match. After bouncing some advice back and forth, I ventured into the frabjous joy of starting a rebuild from scratch, and quickly enough ran into some problems, too.

Usually I cross-compile g4u from Mac OS X, but for the fun of it I did it on NetBSD (7.0-stable branch, amd64 architecture in VMware Fusion) this time. After waiting forever on the CVS checkout, I found that empty directories were not removed - that's what you get if you don't have -P in your ~/.cvsrc file.

I already had the hint that the "g4u-build" script needed a change to have "G4U_BUILD_KERNEL=true".

From there, things went almost smoothly: the build flagged a few files with "variable may be used uninitialized" warnings, which -- thanks to -Werror -- bombed out the build. The fix was easy, and I have no idea why this built for me at release time. I have sent a patch with the required changes to the g4u-help mailing list. (After fixing that I apparently got unsubscribed from my own support mailing list - thank you very much, Sourceforge ;)).

After those little hassles, the build worked fine, and gave me the floppy disk and ISO images that I expected:

>       ls -l `pwd`/g4u*fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u1.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u2.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u3.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u4.fs
>       ls -l `pwd`/g4u.iso
>       -rw-r--r--  2 feyrer  staff  6567936 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u.iso
>       ls -l `pwd`/g4u-kernel.gz
>       -rw-r--r--  1 feyrer  staff  6035680 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u-kernel.gz
Next steps are to confirm the above changes as working with my faithful tester, and then look into how to merge this into the build instructions.

February 06, 2018

Server Fault ssh tunnel refusing connections with "channel 2: open failed"

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling.

From my laptop I launch:

$ ssh -L 7000:localhost:7000 [email protected] -N -v

Then, in another shell:

$ irssi -c localhost -p 7000

The ssh debug says:

debug1: Connection to port 7000 forwarding to localhost port 7000 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip: listening port 7000 for localhost port 7000, connect from port 53954, nchannels 3

I tried also with localhost:80 to connect to the (remote) web server, with identical results.

The remote host runs NetBSD:

bash-4.2# uname -a
NetBSD host 5.1_STABLE NetBSD 5.1_STABLE (XEN3PAE_DOMU) #6: Fri Nov  4 16:56:31 MET 2011  [email protected]:/m/obj/m/src/sys/arch/i386/compile/XEN3PAE_DOMU i386

I am a bit lost. I tried running tcpdump on the remote host, and I spotted these 'bad chksum':

09:25:55.823849 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 67, bad cksum 0 (->3cb3)!) > P, cksum 0xfe37 (incorrect (-> 0xa801), 1622402406:1622402421(15) ack 1635127887 win 4096 <nop,nop,timestamp 5002727 5002603>

I tried restarting the ssh daemon to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh.
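One hedged diagnostic sketch (not from the original post): "connect failed: Connection refused" on a -L forward means sshd on the remote end could not connect to localhost:7000 there, so the first thing to verify is that something is actually listening on that port on the NetBSD host:

```shell
# On the remote NetBSD host: is anything listening on port 7000?
netstat -an -f inet | grep LISTEN
# If port 7000 is absent, the tunnel itself is fine; the service on the
# remote side (e.g. an irssi proxy/bouncer) simply isn't running.
```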


January 12, 2018

Super User What is the default File System in NetBSD? What are its benefits and shortcomings?

I spent some time looking through the documentation, but honestly, I have not found any good answer.

I understand NetBSD supports many FS types in user space, but I would like to know which FS the installer creates by default, and which one I could boot from.

January 04, 2018

Hubert Feyrer NetBSD 7.1.1 released
On December 22nd, NetBSD 7.1.1 was released as a premature Christmas present; see the release announcement.

NetBSD 7.1.1 is the first update with security and critical fixes for the NetBSD 7.1 branch. Those include a number of fixes for security advisories, kernel and userland.

Hubert Feyrer New year, new security advisories!
So things have become a bit silent here, which is due to real life - my apologies. Still, I'd like to wish everyone following along a Happy New Year 2018! And with this, a few new security advisories have been published:
Hubert Feyrer 34C3 talk: Are all BSDs created equally?
I haven't seen this mentioned on the NetBSD mailing lists, and this may be of interest to some - there was a talk about security bugs in the various BSDs at the 34th Chaos Communication Congress:

In summary, many reasons for bugs are shown in many areas of the kernel (system calls, file systems, network stack, compat layer, ...), and what has happened after they were made known to the projects.

As a hint, NetBSD still has a number of Security Advisories to publish, it seems. Anyone wants to help out the security team? :-)

November 03, 2017

Super User Install Linux on Old AirPort Extreme?

I have a very old AirPort Extreme, the A1408. Is it possible to install Linux on it, using the AirPort functionally as a hard disk, and then boot from that? I have also heard that AirPorts run NetBSD. Can you boot into that and run commands?

August 06, 2017

Super User Trying to install NetBSD 7.1 on QEMU/KVM

I am using OpenSUSE Leap 42.2 on a Dell Inspiron 1545.  An error occurs when I try to load NetBSD into QEMU/KVM: "module 'cd9660' pushed by bootloader already exists." I installed Lubuntu 16.04 LTS in QEMU/KVM previously.

June 22, 2017

Server Fault How to log ssh client connection/command?

I would like to know how I could log the SSH command lines a user is using on a server. For example, if the user Alex on my server runs the following set of commands:

$ cd /tmp
$ touch myfile
$ ssh [email protected]
$ ssh [email protected]
$ vim anotherfile
$ ssh [email protected]

I would like to log the ssh commands used on the server in a file which looks like:

[2014-07-25 10:10:10] Alex : ssh [email protected]
[2014-07-25 10:18:20] Alex : ssh [email protected]
[2014-07-25 11:15:10] Alex : ssh [email protected]

I don't care what he did during his ssh session, I just want to know WHEN and TO WHERE he made a connection to another server.

The user is not using bash, and I would like to avoid manipulating .bash_history anyway, as the user can modify it.

Any clue on this?

Thank you :)

Edit: to be more specific:

a user connects to server A and then connects from server A to server B. I want to track down which server he connects to through ssh from server A.

June 11, 2017

NetBSD Installation and Upgrading on DaemonForums Update via CVS
Regarding:

How can I tell NetBSD to only download x86-related files?
Is there a way to upgrade/install updates, which is more comparable to MS Windows updates?


June 08, 2017

Hubert Feyrer g4u 2.6 released
After a five-year period of beta-testing and updating, I have finally released g4u 2.6. With its origins in 1999, I'd like to say: Happy 18th Birthday, g4u!

About g4u: g4u ("ghosting for unix") is a NetBSD-based bootfloppy/CD-ROM that allows easy cloning of PC harddisks to deploy a common setup on a number of PCs using FTP. The floppy/CD offers two functions. The first is to upload the compressed image of a local harddisk to an FTP server; the other is to restore that image via FTP, uncompress it, and write it back to disk. Network configuration is fetched via DHCP. As the harddisk is processed as an image, any filesystem and operating system can be deployed using g4u. Easy cloning of local disks as well as partitions is also supported.

The past: When I started g4u, I had the task of installing a number of lab machines with a dual-boot of Windows NT and NetBSD. The hype was about Microsoft's "Zero Administration Kit" (ZAK) then, but that barely worked for the Windows part - file transfers were slow, depended a lot on the clients' hardware (requiring fiddling with MS-DOS network driver disks), and on the ZAK server the files for installing happened to disappear for no good reason every now and then. As it didn't work well, and left out NetBSD (and everything else), I created g4u. This gave me the (relative) pain of getting things working once, but with the option to easily add network drivers as they appeared in NetBSD (and oh, they did!), plus it allowed me to install any operating system.

The present: We used g4u successfully in our labs back then, booting from CDROM. I also got many donations from public and private institutions plus companies from many sectors, indicating that g4u does make a difference.

In the meantime, the world has changed, and CDROMs aren't used that much any more. Network boot and USB sticks are today's devices of choice, cloning a full disk without knowing its structure has both advantages and disadvantages, and g4u's user interface is still command-line based, with not much room for automation. For storage, FTP servers are nice and fast, but alternatives like SSH/SFTP, NFS, iSCSI and SMB for remote storage, plus local storage (back to fun with filesystems, anyone? avoiding this was why g4u was created in the first place!), should be considered these days. Further aspects include integrity (checksums) and confidentiality (encryption). This leaves a number of open points to address, either in future releases or by other products.

The future: At this point, my time budget for g4u is very limited. I welcome people to contribute to g4u - g4u is Open Source for a reason. Feel free to get back to me for any changes that you want to contribute!

The changes: Major changes in g4u 2.6 include:

The software: Please see the g4u homepage's download section on how to get and use g4u.


June 05, 2017

NetBSD General on DaemonForums X11R7 on NetBSD
I installed NetBSD on an older computer, and it is working basically.
This is my first time with NetBSD.
However, I am having trouble getting the Spanish keyboard to work
after I 'startx'.
From the console, when I first boot, the keyboard is fine, and the keys
work as expected; it is after I start the X server that it becomes a problem.
I did read some manuals and info, ... generated a

# X -config ~/
As instructed here:

But I cannot find any X11 in /etc as mentioned:

If the above test was successful, move the file into place (as either /etc/X11/xorg.conf or /etc/X11/XF86Config) and you are ready to go. The following sections may be of interest or use, but are not required reading.
So I am wondering not only what to put in the .conf file; the example they show is
for a German keyboard and reverses some keys, ... I do not want to do that.
I just want it to know it is a Spanish keyboard, no reversed keys, etc.
So, what to put in the file, and where to place the .conf file?
Thank you
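As a possible shortcut (an illustrative sketch, not from the thread): on X servers recent enough to support InputClass sections, the keyboard layout can be set with a small snippet instead of a full generated config, placed either inside /etc/X11/xorg.conf or as an xorg.conf.d drop-in if the server supports that:

```
Section "InputClass"
        Identifier "spanish keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "es"
EndSection
```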

June 02, 2017

NetBSD General on DaemonForums Looking for Thinkpad Recommendations
I am thinking about upgrading to a Thinkpad X230 for use with NetBSD. It looks like it should be fairly well supported, but I was hoping someone could confirm or deny whether suspend and hibernate work. I'm currently on an X201 that is getting long in the tooth, and it won't wake from suspend well on NetBSD. (As documented by others here.) I'd like to stay with the X series, but if anyone could recommend a Thinkpad newer than the X201 that isn't too chunky, I'd love the recommendation.

June 01, 2017

NetBSD Installation and Upgrading on DaemonForums Boot problems
So I installed NetBSD on an old Pentium 1 laptop I had lying around with 32MB RAM. The problem is that when it goes to boot, it hangs at boot: and then I don't know what to type. If I do boot hd0a:netbsd, the text turns green but it eventually hangs after attempting to load. I used to use a boot command that seemed to work, but now I don't remember what it was.

I also installed FreeBSD on another PC onto the HDD, since the installer would not work on this old PC. But when I put it back in, it also hangs on boot:

Something must be up with how this old PC uses the MBR with the HDD? I am not sure. I am using a CF-to-IDE converter if that makes a difference. I am also fairly new to BSD, but have more Linux experience.

February 23, 2017

Julio Merino Easy pkgsrc on macOS with pkg_comp 2.0

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on macOS using the macOS-specific self-installer.

Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to maintain your macOS system up-to-date and secure.

This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more generic tutorial that uses the pkg_comp-cron package in pkgsrc, see Keeping NetBSD up-to-date with pkg_comp 2.0.

Getting started

Start by downloading and installing OSXFUSE 3 and then download the standalone macOS installer package for pkg_comp. To find the right file, navigate to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkg_comp-<version>-macos.pkg.

Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local/; note that pkg_comp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another.

The installer modifies the default PATH (by creating /etc/paths.d/pkg_comp) to include pkg_comp’s own installation directory and pkgsrc’s installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts accordingly if you don’t use the standard ones.

Lastly, make sure to have Xcode installed in the standard /Applications/ location and that all components required to build command-line apps are available. Tip: try running cc from the command line to see if it prints its usage message.

Adjusting the configuration

The macOS flavor of pkg_comp is configured with an installation prefix of /usr/local/, which means that the executable is located in /usr/local/sbin/pkg_comp and the configuration files are in /usr/local/etc/pkg_comp/. This is intentional to keep the pkg_comp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in.

The configuration files are as follows:

Note that these configuration files use the /var/pkg_comp/ directory as the dumping ground for: the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.

The cron job

The installer configures a cron job that runs as root to invoke pkg_comp daily. The goal of this cron job is to keep your local packages repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e.

This cron job won’t have an effect until you have populated the list.txt file as described above, so it’s safe to let it enabled until you have configured pkg_comp.

If you want to disable the periodic builds, just remove the pkg_comp entry from the crontab.

On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from @daily to @weekly.
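As a sketch, the crontab entry might then look something like this (the exact command line is whatever the installer wrote; the path matches the /usr/local/sbin/pkg_comp location described earlier):

```
# root's crontab, edited via: sudo crontab -e
@weekly /usr/local/sbin/pkg_comp auto
```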

Sample configuration

Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect. I’ll be using the values shown here in the rest of the tutorial:

$ pkg_comp config
AUTO_PACKAGES = autoconf automake bash colordiff dash emacs24-nox11 emacs25-nox11 fuse-bindfs fuse-sshfs fuse-unionfs gdb git-base git-docs glib2 gmake gnuls libtool-base lua52 mercurial mozilla-rootcerts mysql56-server pdksh pkg_developer pkgconf pkgin ruby-jekyll ruby-jekyll-archives ruby-jekyll-paginate scmcvs smartmontools sqlite3 tmux vim
CVS_ROOT = :ext:[email protected]:/cvsroot
CVS_TAG is undefined
DISTDIR = /var/pkg_comp/distfiles
EXTRA_MKCONF = /usr/local/etc/pkg_comp/
GIT_BRANCH = trunk
LOCALBASE = /opt/pkg
PACKAGES = /var/pkg_comp/packages
PBULK_PACKAGES = /var/pkg_comp/pbulk-packages
PKG_DBDIR = /opt/pkg/libdata/pkgdb
PKGSRCDIR = /var/pkg_comp/pkgsrc
SANDBOX_CONFFILE = /usr/local/etc/pkg_comp/sandbox.conf
SYSCONFDIR = /opt/pkg/etc
VARBASE = /opt/pkg/var

SANDBOX_ROOT = /var/pkg_comp/sandbox
SANDBOX_TYPE = darwin-native

Building your own packages by hand

Now that you are fully installed and configured, you’ll build some stuff by hand to ensure the setup works before the cron job comes in.

The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt, is something like this:

$ sudo pkg_comp auto

This trivial-looking command will:

  1. clone or update your copy of pkgsrc;
  2. create the sandbox;
  3. bootstrap pkgsrc and pbulk;
  4. use pbulk to build the given packages; and
  5. destroy the sandbox.

After a successful invocation, you’ll be left with a collection of packages in the /var/pkg_comp/packages/ directory.

If you’d like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file).

But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

$ sudo pkg_comp fetch
$ sudo pkg_comp sandbox-create
$ sudo pkg_comp bootstrap
$ sudo pkg_comp build <package names here>
$ sudo pkg_comp sandbox-destroy

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details.

Lastly note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log/.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkg_comp.

First, unpack the pkgsrc installation. You only have to do this once:

$ cd /
$ sudo tar xzvpf /var/pkg_comp/packages/bootstrap.tgz

That’s it. You can now install any packages you like:

$ PKG_PATH=file:///var/pkg_comp/packages/All sudo pkg_add pkgin <other package names>

The command above assumes you have restarted your shell to pick up the correct path to the pkgsrc installation. If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date

Thanks to the cron job that builds your packages, your local repository under /var/pkg_comp/packages/ will always be up-to-date; you can use that to quickly upgrade your system with minimal downtime.

Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository:

$ sudo /bin/sh -c "echo file:///var/pkg_comp/packages/All >>/opt/pkg/etc/pkgin/repositories.conf"

And, from now on, all it takes to upgrade your system is:

$ sudo pkgin update
$ sudo pkgin upgrade


February 18, 2017

Julio Merino Keeping NetBSD up-to-date with pkg_comp 2.0

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD.

Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to maintain your NetBSD system up-to-date and secure.

This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform.

Getting started

First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are.

Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.

Adjusting the configuration

To use pkg_comp for periodic builds, you’ll need to do some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp/, which is pkg_comp-cron’s “home”:

Lastly, review root’s crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from @daily to @weekly.
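For example, lowering the frequency is a one-line change to the job’s schedule field; the command shown below is a placeholder, so use whatever pkg_comp-cron actually installed in root’s crontab:

```
# crontab -e
# change the pkg_comp job's schedule from:
@daily   <pkg_comp-cron command>
# to:
@weekly  <pkg_comp-cron command>
```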

Sample configuration

Here is what the configuration looks like on my NetBSD development machine as dumped by the config subcommand. Use this output to get an idea of what to expect. I’ll be using the values shown here in the rest of the tutorial:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf config
AUTO_PACKAGES = autoconf automake bash colordiff dash emacs-nox11 git-base git-docs gmake gnuls lua52 mozilla-rootcerts pdksh pkg_comp-cron pkg_developer pkgin sqlite3 sudo sysbuild sysbuild-user sysupgrade tmux vim zsh
CVS_ROOT = :ext:[email protected]:/cvsroot
CVS_TAG is undefined
DISTDIR = /var/pkg_comp/distfiles
EXTRA_MKCONF = /var/pkg_comp/
GIT_BRANCH = trunk
LOCALBASE = /usr/pkg
PACKAGES = /var/pkg_comp/packages
PBULK_PACKAGES = /var/pkg_comp/pbulk-packages
PKG_DBDIR = /usr/pkg/libdata/pkgdb
PKGSRCDIR = /var/pkg_comp/pkgsrc
SANDBOX_CONFFILE = /var/pkg_comp/sandbox.conf
VARBASE = /var

NETBSD_NATIVE_RELEASEDIR = /home/sysbuild/release/amd64
NETBSD_RELEASE_RELEASEDIR = /home/sysbuild/release/amd64
SANDBOX_ROOT = /var/pkg_comp/sandbox
SANDBOX_TYPE = netbsd-release

Building your own packages by hand

Now that everything is installed and configured, you’ll build some stuff by hand to ensure the setup works before the cron job kicks in.

The simplest usage form, which involves full automation, is something like this:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf auto

This trivial-looking command will:

  1. checkout or update your copy of pkgsrc;
  2. create the sandbox;
  3. bootstrap pkgsrc and pbulk;
  4. use pbulk to build the given packages; and
  5. destroy the sandbox.

After a successful invocation, you’ll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages/.

If you’d like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file).
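For example, to rebuild only a couple of the packages from your list (the package names here are just illustrative):

```
# pkg_comp -c /var/pkg_comp/pkg_comp.conf auto tmux vim
```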

But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf fetch
# pkg_comp -c /var/pkg_comp/pkg_comp.conf sandbox-create
# pkg_comp -c /var/pkg_comp/pkg_comp.conf bootstrap
# pkg_comp -c /var/pkg_comp/pkg_comp.conf build <package names here>
# pkg_comp -c /var/pkg_comp/pkg_comp.conf sandbox-destroy

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details.

Lastly note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log/.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg so you have to wipe your existing packages first to avoid build mismatches.

WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won’t be compatible with the ones you already have. Avoid any trouble by starting afresh.

To clean your system, do something like this:

# ... ensure your login shell lives in /bin! ...
# pkg_delete -r -R "*"
# mv /usr/pkg/etc /root/etc.old  # Backup any modified files.
# rm -rf /usr/pkg /var/db/pkg*

Now, rebootstrap pkgsrc and reinstall any packages you previously had:

# cd /
# tar xzvpf /var/pkg_comp/packages/bootstrap.tgz
# echo "pkg_admin=/usr/pkg/sbin/pkg_admin" >>/etc/pkgpath.conf
# echo "pkg_info=/usr/pkg/sbin/pkg_info" >>/etc/pkgpath.conf
# export PATH=/usr/pkg/bin:/usr/pkg/sbin:${PATH}
# export PKG_PATH=file:///var/pkg_comp/packages/All
# pkg_add pkgin pkg_comp-cron <other package names>

Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits so this should be easy.
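A quick way to spot the files that need reconciling is to diff the backup against the freshly installed defaults; a sketch:

```
# diff -ru /usr/pkg/etc /root/etc.old | less
```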

IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial.

Keeping your system up-to-date

If you paid attention when you installed the pkg_comp-cron package, you should have noticed that this configured a cron job to run pkg_comp daily. This means that your packages repository under /var/pkg_comp/packages/ will always be up-to-date so you can use that to quickly upgrade your system with minimal downtime.

Assuming you are going to use pkgtools/pkgin (and why not?), configure your local repository:

# echo 'file:///var/pkg_comp/packages/All' >>/etc/pkgin/repositories.conf

And, from now on, all it takes to upgrade your system is:

# pkgin update
# pkgin upgrade


February 17, 2017

Julio Merino Introducing pkg_comp 2.0 (and sandboxctl 1.0)

After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here!

Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts.

What are these tools?

pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host.

The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase.

sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems, and it is the backing tool behind pkg_comp. It hides the details of creating a functional chroot sandbox on each supported operating system: in some cases, like building a NetBSD sandbox using release sets, things are easy; in others, like on macOS, they are horrifyingly difficult and brittle.

Storytelling time

pkg_comp’s history is a long one. pkg_comp 1.0 first appeared on September 6th, 2002 as the pkgtools/pkg_comp package in pkgsrc. As of this writing, the 1.x series are at version 1.38 and have received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004.

This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at 14 years of age and is possibly one of my longest-living pieces of software still in use.

Motivation for the 2.x rewrite

For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen—and still happens to this day!—is that, once in a while, I’d realize that my packages were out of date (read: insecure) so I’d wipe the whole pkgsrc installation and start from scratch. Very inconvenient; I had to automate that properly.

Thus the main motivation behind the rewrite was primarily to support macOS because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron; I wanted the exact same thing for my packages.

One, two… no, three rewrites

The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell? Just because that was the new hotness in my mind and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and that’s possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install, and alienated contributors.

The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE. This was after I became quite familiar with Python at work and wanted to use the language to rewrite this tool. That experiment didn’t go very far either, though I can’t remember why… probably because I was busy enough at work and creating Kyua.

The third and final rewrite attempt started in 2013 while I had a summer intern and I had a little existential crisis. The year before I had written sysbuild and shtk, so I figured recreating pkg_comp using the foundations laid out by these tools would be easy. And it was… to some extent.

Getting the barebones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks… until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7am—and, many weeks later, the code is finally here and I’m still keeping up with this schedule.

Granted: this third rewrite is not a fancy one, but it wasn’t meant to be. pkg_comp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently after playing with it last year but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time.

The launch of 2.0

On February 12th, 2017, the authoritative sources of pkg_comp 1.x were moved from pkgtools/pkg_comp to pkgtools/pkg_comp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc, and the 2.x series exists as a standalone project on GitHub.

And here we are. Today, February 17th, 2017, pkg_comp 2.0 saw the light!

Why sandboxctl as a separate tool?

sandboxctl is the supporting tool behind pkg_comp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult like macOS.

In pkg_comp 1.x, this logic used to be bundled right into the pkg_comp code, which made it pretty much impossible to generalize for portability. With pkg_comp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but it allows for better testability and understandability. Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs.

I know, I know; the world has moved onto containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing… but that’s all we’ve got in NetBSD, and pkg_comp targets primarily NetBSD. Note, though, that because pkg_comp is separate from sandboxctl, there is nothing preventing adding different sandboxing backends to pkg_comp.


Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping an installer image that includes all of pkg_comp’s dependencies—but I did not want to block the first launch on this.

For now though, you need to download and install the latest source releases of shtk, sandboxctl, and pkg_comp—in this order; pass the --with-atf=no flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system.

If you are already using pkgsrc, you can install the pkgtools/pkg_comp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtools/pkg_comp-cron package to create a pre-configured environment with a daily cron job to run your builds. See the package’s MESSAGE (with pkg_info pkg_comp-cron) for more details.


Both pkg_comp and sandboxctl are fully documented in manual pages. See pkg_comp(8), sandboxctl(8), pkg_comp.conf(5) and sandbox.conf(5) for plenty of additional details.

As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkg_comp on, at least, NetBSD and macOS. Stay tuned.

And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmv/pkg_comp and jmmv/sandboxctl.

February 09, 2017

BSD Talk bsdtalk266 - The nodes take over
We became tired of waiting. File Info: 7Min, 3MB. Ogg Link:

January 22, 2017

Emile Heitor CPU temperature collectd report on NetBSD

pkgsrc’s collectd does not support the thermal plugin, so in order to publish thermal information I had to use the exec plugin:

LoadPlugin exec
# more plugins

<Plugin exec>
	Exec "nobody:nogroup" "/home/imil/bin/"
</Plugin>

And write this simple script that reads CPUs temperature from NetBSD’s envstat command:

$ cat bin/

#!/bin/sh

# hostname and interval are used below; values here are placeholders
hostname=$(hostname)
interval=10

while :
do
	envstat | awk '/cpu[0-9]/ {printf "%s %s\n",$1,$3}' | while read c t
	do
		echo "PUTVAL ${hostname}/temperature/temperature-zone${c#cpu} interval=${interval} N:${t%%.*}"
	done
	sleep ${interval}
done

I then send those values to an influxdb server:

LoadPlugin network
# ...

<Plugin network>
	Server "" "25826"
</Plugin>

And display them using grafana:

grafana setup
NetBSD temperature in grafana


December 09, 2016

Super User NetBSD and TP-Link TL-WN727N (Atheros AR9271 or Ralink RT5370)

Where can I find and install an AR9271 driver for the latest NetBSD? The target machine does not have Internet access and I need to set up the WiFi dongle first.

I only found this:

UPDATE: wpa_supplicant was already written, but I didn't see my device.

When I plug in the dongle it's shown as:

ugen0 at uhub4 port 8 
ugen0: Mediatek 802.11 n WLAN, rev 2.01/00, addr 2 

ifconfig shows only re0 and lo0 interfaces.

UPDATE: I saw on some Linux forums that the dongle uses an Atheros chip, but I checked in Windows and see Ralink. The ral driver is also integrated in NetBSD, but the situation doesn't change - I see no ra~ device in dmesg.boot.

September 14, 2016

Julio Merino #! /usr/bin/env considered harmful

Many programming guides recommend beginning scripts with the #! /usr/bin/env shebang in order to automatically locate the necessary interpreter. For example, for a Python script you would use #! /usr/bin/env python, and then, the saying goes, the script would “just work” on any machine with Python installed.

The reason for this recommendation is that /usr/bin/env python will search the PATH for a program called python and execute the first one found… and that usually works fine on one’s own machine.

Unfortunately, this advice is plagued with problems and assuming it will work is wishful thinking. Let me elaborate. I’ll use Python below for illustration purposes but the following applies equally to any other interpreted language.


i) The first problem is that using #! /usr/bin/env lets you find an interpreter but not necessarily the correct interpreter. In our example above, we told the system to look for an interpreter called python… but we did not say anything about compatible versions. Did you want Python 2.x or 3.x? Or maybe “exactly 2.7”? Or “at least 3.2”? You can’t tell, right? So the computer can’t tell either; regardless, the script will probably run with whichever version happens to be called python, which could be any of them thanks to the alternatives system. The danger is that, if the version is mismatched, the script will fail, and the failure can manifest itself at a much later stage (e.g. a syntax error in an infrequent code path) under obscure circumstances.

ii) The second problem, assuming you ignore the version problem above because your script is compatible with all possible versions (hah), is that you may pick up an interpreter that does not have all prerequisite dependencies installed. Say your script decides to import a bunch of third-party modules: where are those modules located? Typically, the modules exist in a centralized repository that is specific to the interpreter installation (e.g. a .../lib/python2.7/site-packages/ directory that lives alongside the interpreter binary). So maybe your program found a Python 2.7 under /usr/local/bin/ but in reality you needed it to find the one in /usr/bin/ because that’s where all your Python modules are. If that happens, you’ll receive an obscure error that doesn’t describe the actual cause of the problem.

iii) The third problem, assuming your script is portable to all versions (hah again) and that you don’t need any modules (really?), is that you are assuming that the interpreter is available via a specific name. Unfortunately, the name of the interpreter can vary. For example: pkgsrc installs all python binaries with explicitly-versioned names (e.g. python2.7 and python3.0) to avoid ambiguity, and no python symlink is created by default… which means your script won’t run at all even when Python is seemingly installed.

iv) The fourth problem is that you cannot pass flags to the interpreter. The shebang line is intended to contain the name of the interpreter plus a single argument to it. Using /usr/bin/env as the interpreter name consumes the first slot and the name of the interpreter consumes the second, so there is no room to pass additional flags to the program. What happens with the rest of the arguments is platform-dependent: they may be all passed as a single string to env or they may be tokenized as individual arguments. This is not a huge deal though: one argument for flags is too restricted anyway and you can usually set up the interpreter later from within the script.

v) The fifth and worst problem is that your script is at the mercy of the user’s environment configuration. If the user has a “misconfigured” PATH, your script will mysteriously fail at run time in ways that you cannot expect and in ways that may be very difficult to troubleshoot later on. I quote “misconfigured” because the problem here is very subtle. For example: I do have a shell configuration that I carry across many different machines and various operating systems; such configuration has complex logic to determine a sane PATH regardless of the system I’m in… but this, in turn, means that the PATH can end up containing more than one version of the same program. This is fine for interactive shell use, but it’s not OK for any program to assume that my PATH will match their expectations.

vi) The sixth and last problem is that a script prefixed with #! /usr/bin/env is not suitable to being installed. This is justified by all the other points illustrated above: once a program is installed on the system, it must behave deterministically no matter how it is invoked. More importantly, when you install a program, you do so under a set of assumptions gathered by a configure-like script or prespecified by a package manager. To ensure things work, the installed script must see the exact same environment that was specified at installation time. In particular, the script must point at the correct interpreter version and at the interpreter that has access to all package dependencies.

So what to do?

All this considered, you may still use #! /usr/bin/env for the convenience of your own throwaway scripts (those that don’t leave your machine) and also for documentation purposes and as a placeholder for a better default.

For anything else, here are some possible alternatives to using this harmful shebang:

Just don’t assume that the magic #! /usr/bin/env foo is sufficient or even correct for the final installed program.
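As a concrete illustration of the determinism argument above, packaging systems typically rewrite the shebang at install time so that the installed script points at the exact interpreter chosen during configuration. A minimal sketch, where the file names and the interpreter path are made up for illustration:

```shell
# The source ships with a generic placeholder shebang...
printf '#! /usr/bin/env python\nprint("hello")\n' > /tmp/frobnicate.in

# ...and the build substitutes a concrete interpreter path at install time.
PYTHON=/usr/pkg/bin/python2.7   # hypothetical configure-time choice
sed "1s|^#! /usr/bin/env python\$|#! ${PYTHON}|" /tmp/frobnicate.in > /tmp/frobnicate
chmod +x /tmp/frobnicate

head -1 /tmp/frobnicate
```

pkgsrc itself performs this kind of substitution when packaging interpreted scripts, which is one reason installed packages behave deterministically.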

Bonus chatter: There is a myth that the original shebang prefix was #! / so that the kernel could look for it as a 32-bit magic cookie at the beginning of an executable file. I actually believed this myth for a long time… until today, as a couple of readers pointed me at “The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours”, an article with interesting background that contradicts this.

July 08, 2016

Frederic Cambus NetBSD on the CubieBoard2

Here are some notes on installing and running NetBSD/evbarm on the AllWinner A20 powered CubieBoard2. I bought this board a few weeks ago for its SATA capabilities, despite the fact that there are now cheaper boards with more powerful CPUs.

Required steps for creating a bootable micro SD card are detailed on the NetBSD Wiki, and a NetBSD installation is required to run mkubootimage.

I used a USB to TTL serial cable to connect to the board and create user accounts. Do not be afraid of serial, as it has in fact only advantages: there is no need to connect a USB keyboard or an HDMI display, and it also brings back nice memories.

Connecting using cu (from my OpenBSD machine):

cu -s 115200 -l /dev/cuaU0

The device name might be different when using cu on other operating systems.

Adding a regular user in the wheel group:

useradd -m -G wheel username

Adding a password to the newly created user and changing default shell to ksh:

passwd username
chpass -s /bin/ksh username

Installing and configuring pkgin:

export PKG_PATH=
pkg_add pkgin
echo $PKG_PATH > /usr/pkg/etc/pkgin/repositories.conf
pkgin update

Finally, here is a dmesg for reference purposes:

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.0.1 (CUBIEBOARD.201605221355Z)
total memory = 1024 MB
avail memory = 1008 MB
sysctl_createv: sysctl_create(machine_arch) returned 17
timecounter: Timecounters tick every 10.000 msec
mainbus0 (root)
cpu0 at mainbus0 core 0: 912 MHz Cortex-A7 r0p4 (Cortex V7A core)
cpu0: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu0: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu0: mmfr: [0]=0x10101105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu0: pfr: [0]=0x1131 [1]=0x11011
cpu0: 32KB/32B 2-way L1 VIPT Instruction cache
cpu0: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu0: 256KB/64B 8-way write-through L2 PIPT Unified cache
vfp0 at cpu0: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp0: mvfr: [0]=0x10110222 [1]=0x11111111
cpu1 at mainbus0 core 1
armperiph0 at mainbus0
armgic0 at armperiph0: Generic Interrupt Controller, 160 sources (151 valid)
armgic0: 32 Priorities, 128 SPIs, 7 PPIs, 16 SGIs
armgtmr0 at armperiph0: ARMv7 Generic 64-bit Timer (24000 kHz)
armgtmr0: interrupting on irq 27
timecounter: Timecounter "armgtmr0" frequency 24000000 Hz quality 500
awinio0 at mainbus0: A20 (0x1651)
awingpio0 at awinio0
awindma0 at awinio0: DMA
awindma0: interrupting on irq 59
awincnt0 at awinio0
timecounter: Timecounter "CNT64" frequency 24000000 Hz quality 900
com0 at awinio0 port 0: ns16550a, working fifo
com0: console
awindebe0 at awinio0 port 0: Display Engine Backend (BE0)
awintcon0 at awinio0 port 0: LCD/TV timing controller (TCON0)
awinhdmi0 at awinio0: HDMI 1.3
awinwdt0 at awinio0: default period is 10 seconds
awinrtc0 at awinio0: RTC
awinusb0 at awinio0 port 0
awinusb0: no restrict gpio found
ohci0 at awinusb0: OHCI USB controller
ohci0: OHCI version 1.0
usb0 at ohci0: USB revision 1.0
ohci0: interrupting on irq 96
ehci0 at awinusb0: EHCI USB controller
ehci0: EHCI version 1.0
ehci0: companion controller, 1 port each: ohci0
usb1 at ehci0: USB revision 2.0
ehci0: interrupting on irq 71
awinusb1 at awinio0 port 1
awinusb1: no restrict gpio found
ohci1 at awinusb1: OHCI USB controller
ohci1: OHCI version 1.0
usb2 at ohci1: USB revision 1.0
ohci1: interrupting on irq 97
ehci1 at awinusb1: EHCI USB controller
ehci1: EHCI version 1.0
ehci1: companion controller, 1 port each: ohci1
usb3 at ehci1: USB revision 2.0
ehci1: interrupting on irq 72
motg0 at awinio0: OTG
motg0: interrupting at irq 70
motg0: no restrict gpio found
motg0: Dynamic FIFO sizing detected, assuming 16Kbytes of FIFO RAM
usb4 at motg0: USB revision 2.0
awinmmc0 at awinio0 port 0: SD3.0 (DMA)
awinmmc0: interrupting at irq 64
ahcisata0 at awinio0: AHCI SATA controller
ahcisata0: interrupting on irq 88
ahcisata0: ignoring broken port multiplier support
ahcisata0: AHCI revision 1.10, 1 port, 32 slots, CAP 0x6f24ff80<CCCS,PSC,SSC,PMD,SAM,ISS=0x2=Gen2,SCLO,SAL,SALP,SSS,SSNTF,SNCQ>
atabus0 at ahcisata0 channel 0
awiniic0 at awinio0 port 0: Marvell TWSI controller
awiniic0: interrupting on irq 39
iic0 at awiniic0: I2C bus
awge0 at awinio0: Gigabit Ethernet Controller
awge0: interrupting on irq 117
awge0: Ethernet address: 02:0a:09:03:27:08
rlphy0 at awge0 phy 1: RTL8201L 10/100 media interface, rev. 1
rlphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
awinac0 at awinio0: CODEC
audio0 at awinac0: full duplex, playback, capture, mmap, independent
awinhdmiaudio0 at awinio0: HDMI 1.3
audio1 at awinhdmiaudio0: half duplex, playback, mmap
awinnand0 at awinio0
awinir0 at awinio0 port 0: IR
awinir0: interrupting on irq 37
cir0 at awinir0
gpio0 at awingpio0: 18 pins
gpio1 at awingpio0: 25 pins
gpio2 at awingpio0: 28 pins
gpio3 at awingpio0: 12 pins
gpio4 at awingpio0: 12 pins
gpio5 at awingpio0: 28 pins
gpio6 at awingpio0: 22 pins
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
cpu1: 912 MHz Cortex-A7 r0p4 (Cortex V7A core)
cpu1: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu1: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu1: mmfr: [0]=0x10101105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu1: pfr: [0]=0x1131 [1]=0x11011
cpu1: 32KB/32B 2-way L1 VIPT Instruction cache
cpu1: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu1: 256KB/64B 8-way write-through L2 PIPT Unified cache
vfp1 at cpu1: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp1: mvfr: [0]=0x10110222 [1]=0x11111111
sdmmc0 at awinmmc0
uhub0 at usb0: Allwinner OHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub0: 1 port with 1 removable, self powered
uhub1 at usb1: Allwinner EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1: 1 port with 1 removable, self powered
uhub2 at usb2: Allwinner OHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub2: 1 port with 1 removable, self powered
uhub3 at usb3: Allwinner EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub3: 1 port with 1 removable, self powered
uhub4 at usb4: Mentor Graphics MOTG root hub, class 9/0, rev 1.00/1.00, addr 1
uhub4: 1 port with 1 removable, self powered
ld0 at sdmmc0: <0x02:0x544d:SA08G:0x14:0x12436e27:0x0e6>
ld0: 7388 MB, 3752 cyl, 64 head, 63 sec, 512 bytes/sect x 15130624 sectors
ld0: 4-bit width, bus clock 50.000 MHz
boot device: ld0
root on ld0a dumps on ld0b
root file system type: ffs

June 30, 2016

Chris Pinnock A free Time Machine

I’ve been itching to go wireless on my office desk for some time. The final wires to eradicate are from my Mac into a USB hub connected to two hard discs for backups. Years ago I had an Apple Time Capsule. The Time Capsule is an Airport Wi-Fi base station with a hard disc for Macs to back up to using the Time Machine backup software. It was pretty solid kit for a couple of years. Under the hood it runs NetBSD and, as an aside, I have had a few beers with the guy who ported the operating system. The power supply decided to give up – a very common fault, apparently.


I will clean the cables up. I promise.

When I was on my travels and living in two places, I had hard discs in both locations. The Mac supports multiple discs for backups and I encrypted the backups in case the discs were stolen. But now I’m in one home, I want to be able to move around the house with the Mac but still backup without having to go to the office. We are a two Mac house, so we need something more convenient.

I already have a base station and I don’t really want to shell out loads of money for an Apple one. There are several options for setting up a Time Capsule equivalent.

  1. If you have a spare Mac, get a copy of Mac OS X Server. It will support Time Machine backups for multiple Macs and also supports quotas so that the size of the backups can be controlled. I don’t have a spare stationary Mac.
  2. Anything that speaks the Apple file sharing protocol reasonably well.

Enter the Raspberry Pi. I have a Raspberry Pi 3 and within minutes one can install the Netatalk software. This has been available for years on Linux and implements the Apple file sharing protocols really well. With an external drive added, I was able to get a Time Machine backup working using this article.
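For reference, the heart of such a setup is an afp.conf share flagged as a Time Machine volume. A sketch, assuming Netatalk 3 option names; the path, model string, and size limit here are placeholders, not values from the article:

```
[Global]
  mimic model = TimeCapsule6,106

[Time Machine]
  path = /mnt/timemachine
  time machine = yes
  vol size limit = 500000
```

With something like this in place, the share shows up as a backup destination in Time Machine’s preferences.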

I could not use my existing backup drive as is. Linux will read and write Mac OS drives, but there is a bit of toing and froing, so it is best to start with a fresh native Linux filesystem. Even if you do get it to work with the Mac OS drive, it will not be able to reuse a Time Machine backup made while the drive was directly connected.

I’ve been using this setup for the last couple of weeks. I have not had to do a serious restore yet, and I should caveat that I still keep a hard drive that I plug directly into the machine, just in case. The first rule of backups: a file doesn’t exist unless there are three copies on different physical media.

(The Raspberry Pi is also set up as a MiniDLNA server. It will stream media to Xboxes and other media players.)