NetBSD Planet


December 14, 2019

DragonFly BSD Digest In Other BSDs for 2019/12/14

Lots of variety this week.


December 13, 2019

Super User NetBSD - no pkg

After a full installation of the latest NetBSD I tried to launch pkgin, but got pkgin not found; the same happened for pkgsrc. Then I found that there's no /usr/pkg location.

Is that normal, or did I do something wrong?


December 12, 2019

NetBSD Blog Clang build bot now uses two-stage builds, and other LLVM/LLDB news

Upstream describes LLDB as a next generation, high-performance debugger. It is built on top of the LLVM/Clang toolchain and features great integration with it. At the moment, it primarily supports debugging C, C++ and ObjC code, and there is interest in extending it to more languages.

In February, I started working on LLDB, contracted by the NetBSD Foundation. So far I've been working on reenabling continuous integration, squashing bugs, improving NetBSD core file support, extending NetBSD's ptrace interface to cover more register types, fixing compat32 issues, and fixing watchpoint support. In October 2019, I finished my work on threading support (pending pushes) and fought issues related to the upgrade to NetBSD 9.

November was focused on finally pushing the aforementioned patches and on major buildbot changes. Notably, I worked on extending the test runs to compiler-rt, which required revisiting past driver issues as well as resolving new ones. More details on this below.

LLDB changes

Test updates, minor fixes

The previous month left us with a few regressions caused by the kernel upgrade. I've done my best to quickly figure out the ones I could; for the remaining ones, Kamil suggested that I mark them XFAIL for now and revisit them later while addressing broken tests. This is what I did.

While implementing additional tests in the threading patches, I've discovered that the subset of LLDB tests dedicated to testing lldb-server behavior was disabled on NetBSD. I've reenabled lldb-server tests and marked failing tests appropriately.

After enabling and fixing those tests, I implemented the missing support in the NetBSD plugin for getting the thread name.

I've also switched our process plugin to use the newer PT_STOP request instead of calling kill(). The main advantage of PT_STOP is that it reliably notifies about SIGSTOP via wait() even if the process is already stopped.
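A minimal sketch of the difference (argument conventions assumed from ptrace(2); error handling omitted; this is an illustration, not the actual plugin code):

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

static void
stop_tracee(pid_t pid)
{
        /* kill(pid, SIGSTOP) produces no wait() notification when the
         * process is already stopped; PT_STOP reports it reliably. */
        ptrace(PT_STOP, pid, NULL, 0);
        waitpid(pid, NULL, 0);
}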

I've also been able to reenable the EOF detection test that was previously disabled due to bugs in old versions of the NetBSD 8 kernel.

Threading support pushed

After satisfying the last upstream requests, I was able to merge the three threading support patches:

  1. basic threading support,

  2. watchpoint support in threaded programs,

  3. concurrent watchpoint fixes.

This fixed 43 tests. It also triggered some flaky tests and a known regression, and I'm planning to address them as part of the final bug cracking.

Build bot redesign

Recap of the problems

The tests of the clang runtime components (compiler-rt, openmp) are performed using the freshly built clang. This clang attempts to build and link C++ programs against libc++. However, our clang driver naturally requires a system installation of libc++ — after all, we normally don't want the driver to include temporary build paths for regular executables! For this reason, building against the fresh libc++ in the build tree requires appropriate -cxx-isystem, -L and -Wl,-rpath flags.
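For illustration, a test compilation pointed at the fresh runtimes might look like this (the build-tree paths here are made up for the example):

BUILD=$HOME/llvm-build
clang++ test.cpp \
    -cxx-isystem "$BUILD/include/c++/v1" \
    -L "$BUILD/lib" \
    -Wl,-rpath,"$BUILD/lib"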

So far, we have managed to resolve this by using existing mechanisms to add the extra flags to the test compiler calls. However, the existing solutions do not seem to suffice for compiler-rt. While technically I could work on adding more support code for that, I've decided it's better to look for a more general and permanent solution.

Two-stage builds

As part of the solution, I've proposed switching our build bot to a two-stage build model. That is, we first use the system GCC to build a minimal functioning clang. Then we use this newly-built clang to build the whole LLVM suite, including another copy of clang.

The main advantage of this model is that we're verifying whether clang is capable of building a working copy of itself. Additionally, it insulates us against problems with the host GCC. For example, we've experienced issues with GCC 8 and the default -O3. On the negative side, it increases the build time significantly, especially since the second stage needs to be rebuilt from scratch every time.
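Upstream LLVM's CMake has built-in support for this kind of bootstrap; a minimal sketch of how the two-stage model can be expressed with it (the actual buildbot configuration may differ):

# Stage 1: a minimal clang built with the host GCC.
cmake -G Ninja ../llvm \
    -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
    -DLLVM_ENABLE_PROJECTS=clang \
    -DCLANG_ENABLE_BOOTSTRAP=ON
ninja           # builds the stage-1 clang
ninja stage2    # rebuilds the suite using the stage-1 clang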

A common practice in the compiler world is to actually do three stages. In this case, that would mean building a minimal clang with the host compiler, then the second stage with the first stage's clang, then the third stage with the second stage's clang. This has the additional benefit of verifying that clang is capable of building a compiler that is itself fully capable of building itself. However, this seems to offer little actual gain for us, while it would increase the build time even more.

Compiler wrappers

Another interesting side effect of the two-stage build model is that it provides an opportunity to inject wrappers over the clang and clang++ built in the first stage. Those wrappers allow us to add the necessary -I, -L and -Wl,-rpath arguments without having to patch the driver for this special case.
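A sketch of such a wrapper (the paths are placeholders):

#!/bin/sh
# Stage-1 clang++ wrapper: inject the flags needed to find the
# stage-1 headers and runtime libraries, so the driver itself
# needs no NetBSD-specific patching.
STAGE1=/path/to/stage1
exec "$STAGE1/bin/clang++" \
    -I "$STAGE1/include/c++/v1" \
    -L "$STAGE1/lib" -Wl,-rpath,"$STAGE1/lib" \
    "$@"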

Furthermore, I've used this opportunity to add experimental LLD usage to the first stage, and use it instead of GNU ld for the second stage. The LLVM linker has a significantly smaller memory footprint and therefore allows us to improve build efficiency. Sadly, proper LLD support for NetBSD still depends on patches that are waiting for upstream review.

Compiler-rt status and tests

The builds of compiler-rt have been reenabled for the build bot. I am planning to start enabling individual test groups (e.g. builtins, ASAN, MSAN, etc.) as I get them to work. However, there are still other problems to be resolved before that happens.

Firstly, there are new test regressions. Some of them seem to be specifically related to the build layout changes, or to the use of LLD as the linker. I am currently investigating them.

Secondly, compiler-rt tests aim to test all supported multilib targets by default. We are currently preparing to enable compat32 in the kernel on the host running the build bot, and therefore to achieve proper multilib support for running them.

Thirdly, ASAN, MSAN and TSAN are incompatible with ASLR (address space layout randomization), which is enabled by default on NetBSD. Furthermore, XRay is incompatible with the W^X restriction.

Making tests work with PaX features

We previously addressed the ASLR incompatibility by adding an explicit check for it and bailing out if it's enabled. However, while this somewhat resolves the problem for regular users, it means that the relevant tests can't be run on hosts that have ASLR enabled.

Kamil suggested that we use paxctl to disable ASLR per executable here. This has the obvious advantage of enabling the tests to work on all hosts. However, it requires injecting the paxctl invocation between the build and run steps of the relevant tests.

The ‘obvious’ solution to this problem would be to add a kind of %paxctl_aslr substitution that evaluates to a paxctl call on NetBSD, and to : (a no-op) on other systems. However, this would have required updating all the relevant tests and making sure that the invocation keeps being included in new tests.

Instead, I noticed that the %run substitution already uses various kinds of wrappers for other targets, e.g. to run tests via an emulator. I went for the more agreeable solution of substituting %run in the appropriate test suites with a tiny wrapper that calls paxctl before executing the test.
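A sketch of that wrapper (the paxctl +a flag, which marks the binary as "explicitly disable ASLR", is an assumption here; see paxctl(8)):

#!/bin/sh
# %run wrapper: disable ASLR on the freshly built test binary,
# then execute it with its original arguments.
paxctl +a "$1"
exec "$@"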

Clang/LLD dependent libraries feature

Introduction to the feature

Enabling the two-stage builds also had another side effect. Since the stage 2 build is done via clang+LLD, the newly added dependent libraries feature became enabled and broke our build.

Dependent libraries are a feature permitting source files to specify additional libraries that are afterwards injected into the linker invocation. This is done via a #pragma originally used by MSVC. Consider the following example:

#include <stdio.h>
#include <math.h>
#pragma comment(lib, "m")

int main() {
    printf("%f\n", pow(2, 4.3));
    return 0;
}

When the source file is compiled using Clang for an ELF target, the lib comments are converted into a .deplibs object section:

$ llvm-readobj -a --section-data test.o
[...]
  Section {
    Index: 6
    Name: .deplibs (25)
    Type: SHT_LLVM_DEPENDENT_LIBRARIES (0x6FFF4C04)
    Flags [ (0x30)
      SHF_MERGE (0x10)
      SHF_STRINGS (0x20)
    ]
    Address: 0x0
    Offset: 0x94
    Size: 2
    Link: 0
    Info: 0
    AddressAlignment: 1
    EntrySize: 1
    SectionData (
      0000: 6D00                                 |m.|
    )
  }
[...]

When the objects are linked into a final executable using LLD, it collects all libraries from .deplibs sections and links to the specified libraries.

On systems requiring an explicit -lm (e.g. Linux), the example program pasted above would have to be built via:

$(CC) ... test.c -lm

However, when using Clang+LLD, it is sufficient to call:

clang -fuse-ld=lld ... test.c

and the library is included automatically. Of course, this normally makes little sense, because you have to maintain compatibility with other compilers and linkers, as well as with old versions of Clang and LLD.

Use of LLVM to approach static library dependency problem

LLVM started using the deplibs feature internally in D62090, in order to specify the linkage between the runtimes and their dependent libraries. Apparently, the goal was to provide an in-house solution to the static library dependency problem.

The problem discussed is that static libraries on Unix-derived platforms are primitive archives containing object files. Unlike shared libraries, they do not contain lists of the other libraries they depend on. As a result, when linking against a static library, the user needs to explicitly pass all of its dependencies to the linker invocation.
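For illustration, take a hypothetical libfoo.a whose code calls pow() from libm; the archive records no dependency information, so every consumer must know to append -lm by hand:

cc -c foo.c
ar rcs libfoo.a foo.o
cc main.c -L. -lfoo -lm    # omitting -lm makes the link fail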

Over years, a number of workarounds were proposed to relieve the user (or build system) from having to know the exact dependencies of the static libraries used. A few worth noting include:

The first two solutions work at the build system level, and are therefore portable across compilers and linkers. The third one requires linker support, but has been used successfully to some degree thanks to the wide deployment of GNU binutils, as well as support in other linkers (e.g. LLD).

Dependent libraries provide yet another attempt to solve the same problem. Unlike the approaches listed, it is practically transparent to the static library format — at the cost of requiring both compiler and linker support. However, since the runtimes are normally supposed to be used by Clang itself, at least the compiler-support requirement can normally be assumed to be satisfied.

Why did it break NetBSD?

After that lengthy introduction, let's get to the point. As a result of my changes, the second stage is now built using Clang/LLD. However, it seems that the original change making use of deplibs in the runtimes was tested only on Linux — and it caused failures for us, since it implicitly appended libraries not present on NetBSD.

Over time, users of a few other systems added various #ifdefs in order to exclude Linux-specific libraries from their builds. However, this solution is hardly optimal. It requires us to maintain two disjoint sets of rules for adding each library — one in CMake for linking shared libraries, and another in the source files for emitting dependent libraries.

Since dependent library pragmas are present only in source files and not in headers, I went for a different approach. Instead of using a second set of rules to decide which libraries to link, I exported the results of the CMake checks as -D flags, and made the dependent libraries conditional on those check results.
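The resulting pattern looks roughly like this (the macro name is invented for illustration; the actual flag names in the patches differ):

/* emitted only when the corresponding CMake check succeeded */
#ifdef LIBUNWIND_HAS_PTHREAD_LIB
#pragma comment(lib, "pthread")
#endif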

Firstly, I fixed the deplibs in libunwind in order to fix the build on NetBSD. Afterwards, per upstream's request, I extended the deplibs fix to libc++ and libc++abi.

Future plans

I am currently still working on fixing regressions after the switch to two-stage builds. As things develop, I am also planning to enable further test suites there.

Furthermore, I am planning to continue with the items from the original LLDB plan. Those are:

  1. Add support to backtrace through signal trampoline and extend the support to libexecinfo, unwind implementations (LLVM, nongnu). Examine adding CFI support to interfaces that need it to provide more stable backtraces (both kernel and userland).

  2. Add support for i386 and aarch64 targets.

  3. Stabilize LLDB and address breaking tests from the test suite.

  4. Merge LLDB with the base system (under LLVM-style distribution).

This work is sponsored by The NetBSD Foundation

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

https://netbsd.org/donations/#how-to-donate


December 07, 2019

DragonFly BSD Digest In Other BSDs for 2019/12/07

Accidental theme: BUGs BUGs BUGs and also happy birthday me!  It's a bit brief because, as usual, I am working extra.

 


December 03, 2019

Roy Marples dhcpcd-ui-0.7.7 released

dhcpcd-ui-0.7.7 has been released with the following changes:

ftp://roy.marples.name/pub/dhcpcd/dhcpcd-ui-0.7.7.tar.xz
http://roy.marples.name/downloads/dhcpcd/dhcpcd-ui-0.7.7.tar.xz


December 02, 2019

NetBSD Blog First release candidate for NetBSD 9.0 available!

Since the start of the release process four months ago, a lot of improvements went into the branch - more than 500 pullups were processed!

This includes usbnet (a common framework for usb ethernet drivers), aarch64 stability enhancements and lots of new hardware support, installer/sysinst fixes and changes to the NVMM (hardware virtualization) interface.

We hope this will lead to the best NetBSD release ever (only to be topped by NetBSD 10 next year).

Here are a few highlights of the new release:

You can download binaries of NetBSD 9.0_RC1 from our Fastly-provided CDN.

For more details refer to the official release announcement.

Please help us out by testing 9.0_RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome, please mail releng. Your input will help us put the finishing touches on what promises to be a great release!

Enjoy!

Martin


December 01, 2019

Server Fault ssh tunnel refusing connections with "channel 2: open failed"

All of a sudden (read: without changing any parameters), my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling.

From my laptop I launch:

$ ssh -L 7000:localhost:7000 [email protected] -N -v

Then, in another shell:

$ irssi -c localhost -p 7000

The ssh debug says:

debug1: Connection to port 7000 forwarding to localhost port 7000 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip: listening port 7000 for localhost port 7000, connect from 127.0.0.1 port 53954, nchannels 3

I also tried localhost:80, to connect to the (remote) web server, with identical results.

The remote host runs NetBSD:

bash-4.2# uname -a
NetBSD host 5.1_STABLE NetBSD 5.1_STABLE (XEN3PAE_DOMU) #6: Fri Nov  4 16:56:31 MET 2011  [email protected]:/m/obj/m/src/sys/arch/i386/compile/XEN3PAE_DOMU i386

I am a bit lost. I tried running tcpdump on the remote host, and I spotted these 'bad cksum' messages:

09:25:55.823849 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 67, bad cksum 0 (->3cb3)!) 127.0.0.1.54381 > 127.0.0.1.7000: P, cksum 0xfe37 (incorrect (-> 0xa801), 1622402406:1622402421(15) ack 1635127887 win 4096 <nop,nop,timestamp 5002727 5002603>

I tried restarting the ssh daemon, to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might be the virtual network card driver, or somebody might have rooted our ssh.

Ideas..?

NetBSD.org New Developer in November 2019

November 30, 2019

DragonFly BSD Digest In Other BSDs for 2019/11/30

End of year events are starting to get scheduled; watch for one near you.


November 29, 2019

DragonFly BSD Digest Circular cross-pollination

I sorta like seeing these things ricochet back and forth.


November 27, 2019

NetBSD Blog Debugging FFS Mount Failures
This report was written by Maciej Grochowski as a part of developing the AFL+KCOV project.

This report is a continuation of my previous work on Fuzzing Filesystems via AFL. You can find previous posts where I described the fuzzing (part1, part2) or my EuroBSDcon presentation.
In this part, we won't talk much about fuzzing itself; instead, I want to describe the process of finding the root causes of filesystem issues, and my recent work on improving this process.
This story begins with a mount issue that I found during my very first run of AFL, and which I presented during my talk at EuroBSDcon in Lillehammer.

Invisible Mount point

afl-fuzz: /dev/vnd0: opendisk: Device busy

That was the first error I saw on my setup, after just a couple of seconds of running AFL.
I was not sure what exactly the problem was, and thought that the mount wrapper might be causing it.
However, after a long troubleshooting session, I realized that this might be the first real issue I had found.
To give the reader a better understanding of the problem without digging too deeply into the fuzzer setup or the mount process, let's assume that we have some broken filesystem image exposed as a block device visible as /dev/wd1a.

The device can easily be mounted on the mount point mnt1; however, when we try to unmount it we get an error: error: ls: /mnt1: No such file or directory, and if we try the raw system call unmount(2), it also ends up with a similar error.

However, we can see clearly that the mount point exists with the mount command:

# mount
/dev/wd0a on / type ffs (local)
...
tmpfs on /var/shm type tmpfs (local)
/dev/vnd0 on /mnt1 type ffs (local)

Yet any lstat(2)-based command tries to convince us that no such directory exists.

# ls / | grep mnt
mnt
mnt1

# ls -alh /mnt1
ls: /mnt1: No such file or directory
# stat /mnt1
stat: /mnt1: lstat: No such file or directory

To understand what is happening, we need to dig a little deeper than standard shell tools allow.
First of all, mnt1 is a directory created on the root partition of a local filesystem, so getdents(2) or dirent(3) should show it as an entry inside the directory structure on the disk.
The raw getdents syscall is a great tool for checking directory contents, because it reads the data directly from the directory structure on disk.
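A minimal sketch of such a tool (using NetBSD's getdents(2) prototype and dirent fields; the author's actual helper is not shown in the post, and this version prints the numeric d_type rather than a symbolic name):

#include <sys/types.h>
#include <dirent.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        char buf[8192], *p;
        struct dirent *d;
        int fd, n;

        if (argc != 2)
                errx(1, "usage: %s dir", argv[0]);
        if ((fd = open(argv[1], O_RDONLY)) == -1)
                err(1, "open");
        /* read raw directory entries straight from the filesystem */
        while ((n = getdents(fd, buf, sizeof(buf))) > 0)
                for (p = buf; p < buf + n; p += d->d_reclen) {
                        d = (struct dirent *)p;
                        printf("#: %ju, %u, %u, %u (%s)\n",
                            (uintmax_t)d->d_fileno, d->d_reclen,
                            d->d_type, d->d_namlen, d->d_name);
                }
        close(fd);
        return 0;
}

Running such a tool against the root directory gives output like the following: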

# ./getdents  /
|inode_nr|rec_len|file_type|name_len(name)|
#:   2,      16,    IFDIR,       1 (.)
#:   2,      16,    IFDIR,       2 (..)
#:   5,      24,    IFREG,       6 (.cshrc)
#:   6,      24,    IFREG,       8 (.profile)
#:   7,      24,    IFREG,       8 (boot.cfg)
#: 3574272,  24,    IFDIR,       3 (etc)
...
#: 3872128,  24,    IFDIR,       3 (mnt)
#: 5315584,  24,    IFDIR,       4 (mnt1)

getdents confirms that we have mnt1 as a directory inside the root filesystem of our system.
But we cannot execute lstat, unmount or any other system call that requires a path to this file.
A quick look at the definitions of these system calls shows their signatures:

unmount(const char *dir, int flags);
stat(const char *path, struct stat *sb);
lstat(const char *path, struct stat *sb);
open(const char *path, int flags, ...);

All of these functions take a path to the file as an argument, which, as we know, will end up in the VFS lookup.
How about something that uses a file descriptor? Can we even obtain one?
As we saw earlier, running open(2) on the path fails as well.
It looks like we won't be able to understand the issue without digging inside the VFS lookup.

Get Filesystem Root

After some debugging and a walk through the code, I found the place that caused the error.
During name resolution, the VFS needs to check for and switch filesystems in the case of nested mount points.
After the new filesystem is found, VFS_ROOT is issued on that particular mount point.
In the case of FFS, VFS_ROOT translates to ufs_root, which calls the vnode cache with a fixed value equal to the inode number of the root inode, which is 2 for UFS.

#define UFS_ROOTINO     ((ino_t)2)  

Below is a listing of the code of ufs_root from ufs/ufs/ufs_vfsops.c:

int
ufs_root(struct mount *mp, struct vnode **vpp)
{
...
        if ((error = VFS_VGET(mp, (ino_t)UFS_ROOTINO, &nvp)) != 0)
               return (error);

Using the debugger, I was able to confirm that the entry with number 2, after hashing, does not exist in the vcache.
As a next step, I wanted to check the root inode on the given filesystem image.
Filesystem debuggers are good tools for such checks. NetBSD comes with fsdb, a general-purpose filesystem debugger.
Nonetheless, by default fsdb links against fsck_ffs, which ties it to FFS.

Filesystem debugger to the rescue!

A filesystem debugger is a tool designed to browse on-disk structures and the values of particular entries. It helps in understanding filesystem issues by showing the exact values that the system reads from the disk. Unfortunately, the current fsdb_ffs is a bit limited in the amount of information it exposes.
Here is example output from trying to browse the damaged root inode on the corrupted FS:

# fsdb -dnF -f ./filesystem.out

** ./filesystem.out (NO WRITE)
superblock mismatches
...
BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST ALTERNATE                                     
clean = 0
isappleufs = 0, dirblksiz = 512
Editing file system `./filesystem.out'
Last Mounted on /mnt
current inode 2: unallocated inode

fsdb (inum: 2)> print
command `print
'
current inode 2: unallocated inode

FSDB Plugin: Print Formatted

Fortunately, fsdb_ffs exposes all the interfaces necessary to access this data with little effort.
I implemented a simple plugin that allows browsing all the values inside inodes, the superblock and the cylinder groups on FFS. There are still a couple of TODOs to be finished, but the current version allows us to review inodes.

fsdb (inum: 2)> pf inode number=2 format=ufs1
command `pf inode number=2 format=ufs1
'
Disk format ufs1 inode: 2 block: 512
 ---------------------------- 
di_mode: 0x0                    di_nlink: 0x0
di_size: 0x0                    di_atime: 0x0
di_atimensec: 0x0               di_mtime: 0x0
di_mtimensec: 0x0               di_ctime: 0x0
di_ctimensec: 0x0               di_flags: 0x0
di_blocks: 0x0                  di_gen: 0x6c3122e2
di_uid: 0x0                     di_gid: 0x0
di_modrev: 0x0
 --- inode.di_oldids ---

We can see that most of the root inode fields in the filesystem image got wiped out.
For comparison, if we take a look at the root inode of a freshly created FS, we see the proper structure.
Based on that, we can quickly see that the fields di_mode, di_nlink, di_size and di_blocks differ and could be the root cause.

Disk format ufs1 inode: 2 block: 512
 ---------------------------- 
di_mode: 0x41ed                 di_nlink: 0x2
di_size: 0x200                  di_atime: 0x0
di_atimensec: 0x0               di_mtime: 0x0
di_mtimensec: 0x0               di_ctime: 0x0
di_ctimensec: 0x0               di_flags: 0x0
di_blocks: 0x1                  di_gen: 0x68881d2c
di_uid: 0x0                     di_gid: 0x0
di_modrev: 0x0
 --- inode.di_oldids ---

From FSDB and incore to source code

First, let's summarize what we already know:

  1. unmount fails due to a namei operation failure caused by the corrupted FS
  2. The filesystem has a corrupted root inode
  3. The corrupted root inode has the fields di_mode, di_nlink, di_size and di_blocks set to zero

Now we can find the place where inodes are loaded from the disk; for FFS this function is ffs_init_vnode(ump, vp, ino).
It is called while loading a vnode in the VFS layer, inside ffs_loadvnode.
A quick walk through ffs_loadvnode exposes the usage of the field i_mode:

         error = ffs_init_vnode(ump, vp, ino);
         if (error)
                return error;

         ip = VTOI(vp);
         if (ip->i_mode == 0) {
                 ffs_deinit_vnode(ump, vp);

                 return ENOENT;
         }

This seems to be the source of our problem. Whenever we load an inode from disk to obtain the vnode, we validate that i_mode is non-zero.
In our case the root inode is wiped out, with the result that the vnode is dropped and an error is returned.
Simply put, we cannot load any inode with i_mode set to zero, and inode number 2, the root, is no different. Because of that, the VFS_LOADVNODE operation always fails, so the lookup fails as well, and name resolution returns an ENOENT error. To fix this issue we need root inode validation at the mount step; I created such a validation and tested it against the corrupted filesystem image.
The mount returned an error, which confirmed the observation that such validation would help.
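For illustration, such a mount-time check could look roughly like this (a sketch in kernel context mirroring the ufs_root path shown earlier, not the exact fix that was proposed):

/* At mount time, try fetching the root inode once, so a wiped root
 * inode makes mount(2) fail early instead of creating a phantom
 * mount point later. UFS_ROOTINO is the define shown above. */
static int
ffs_check_rootino(struct mount *mp)
{
        struct vnode *vp;
        int error;

        /* fails with ENOENT when the root inode has i_mode == 0 */
        if ((error = VFS_VGET(mp, (ino_t)UFS_ROOTINO, &vp)) != 0)
                return error;
        vrele(vp);
        return 0;
}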

Conclusions

This post is a continuation of the project "Fuzzing Filesystems with kcov and AFL".
I presented how fuzzed bugs, which do not always show up as system panics, can be analyzed, and what tools a programmer can use.
The investigation above described the very first bug that I found by fuzzing mount(2) with AFL+kcov.
During that root cause analysis, I realized the need for better tools for debugging filesystem-related issues.
For that reason, I added a small piece of functionality, pf (print formatted), to fsdb(8) to allow walking through the on-disk structures. The described bug was reported, together with a proposed fix based on validation of the root inode, on the tech-kern mailing list.

Future work

  1. Tools: I am still making progress with fuzzing the mount process; however, I focus not only on finding bugs but also on tools that can be used for debugging and for regression tests. I am planning to add better support for browsing an inode's blocks to the fsdb pf command, as well as write functionality that would make further testing and potential recovery easier.
  2. Fuzzing: In the next post, I will show a remote AFL setup with an example of its usage.
  3. I got a suggestion to take a look at the FreeBSD UFS security checks on mount(2) done by McKusick. I think it is worth seeing what else is validated there that we could port to NetBSD FFS.

November 26, 2019

NetBSD.org New Security Advisory: NetBSD-SA2019-005

November 25, 2019

Super User What device does NetBSD use for a USB modem?

I'm testing some software on NetBSD 8.1 x86_64. The software opens a USB modem and issues AT commands. The software tested OK on Debian, Fedora, OS X, and OpenBSD. The software is having trouble on NetBSD.

NetBSD's dmesg shows:

umodem0 at uhub1 port 1 configuration 2 interface 0
umodem0: U.S.Robotics (0xbaf) USB Modem (0x303), rev 2.00/2.00, addr 2, iclass 2/2
umodem0: data interface 1, has CM over data, has break
umodem0: status change notification available
ucom0 at umodem0

If I am parsing the NetBSD man pages properly (which may not be the case), I should be able to access the modem via /dev/ucom0. See also the umodem(4) man page.

The test user is part of the dialer group. The software was not able to open /dev/ucom0, /dev/umodem0, ucom0 or umodem0. All opens result in No such file or directory. Additionally, there are no /dev/ttyACMn or /dev/cuaUn devices.

How do I access the modem on NetBSD?


November 23, 2019

DragonFly BSD Digest In Other BSDs for 2019/11/23

Read that last link, if only to make your convention-going safer in the future.


November 20, 2019

NetBSD Blog Board of Directors and Officers elected

Per the membership voting, we have seated the new Board of Directors of the NetBSD Foundation:

We would like to thank Makoto Fujiwara <[email protected]> and Jeremy C. Reed <[email protected]> for their service on the Board of Directors during their term(s).

The new Board of Directors have voted in the executive officers for The NetBSD Foundation:

President: William J. Coldwell
Vice President: Pierre Pronchery
Secretary: Christos Zoulas
Assistant Secretary: Thomas Klausner
Treasurer: Christos Zoulas
Assistant Treasurer: Taylor R. Campbell

Thanks to everyone who voted; we look forward to a great 2020.


November 17, 2019

Stack Overflow Compile only kernel module on NetBSD

Is there a way to compile only a kernel module on NetBSD? I can't seem to figure out how to do so without recompiling the entire kernel. Thanks!


November 13, 2019

Roy Marples dhcpcd-8.1.2 released

dhcpcd-8.1.2 has been released with the following changes:

The last issue was the cause of the recent report about dhcpcd pegging a CPU, so this is quite an important upgrade from 8.1.1.

ftp://roy.marples.name/pub/dhcpcd/dhcpcd-8.1.2.tar.xz
https://roy.marples.name/downloads/dhcpcd/dhcpcd-8.1.2.tar.xz


November 09, 2019

NetBSD Blog LLDB Threading support now ready for mainline

Upstream describes LLDB as a next generation, high-performance debugger. It is built on top of the LLVM/Clang toolchain and features great integration with it. At the moment, it primarily supports debugging C, C++ and ObjC code, and there is interest in extending it to more languages.

In February, I started working on LLDB, contracted by the NetBSD Foundation. So far I've been working on reenabling continuous integration, squashing bugs, improving NetBSD core file support, extending NetBSD's ptrace interface to cover more register types, fixing compat32 issues, and fixing watchpoint support. Then I started working on improving thread support, which is taking longer than expected. You can read more about that in my September 2019 report.

So far, the number of issues uncovered while enabling proper threading support has stopped me from merging the work-in-progress patches. However, I've finally reached the point where I believe that the current work can be merged and the remaining problems can be resolved afterwards. More on that, and on other LLVM-related events of the last month, in this report.

LLVM news and buildbot status update

LLVM switched to git

Probably the most important event to note is that the LLVM project has switched from Subversion to git, and moved its repositories to GitHub. While the original plan provided for maintaining the old repositories as read-only mirrors, as of today this still hasn't been implemented. For this reason, we were forced to quickly switch the buildbot to the git monorepo.

The buildbot is operational now, and seems to be handling git correctly. However, it is connected to the staging server for the time being. Its URL changed to http://lab.llvm.org:8014/builders/netbsd-amd64 (i.e. the port changed from 8011 to 8014).

Monthly regression report

Now for the usual list of 'what they broke this time'.

LLDB has been given a new API for handling files, in particular for passing them to Python scripts. The API change caused some 'bad file descriptor' errors, e.g.:

ERROR: test_SBDebugger (TestDefaultConstructorForAPIObjects.APIDefaultConstructorTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/motus/netbsd8/netbsd8/llvm/tools/lldb/packages/Python/lldbsuite/test/decorators.py", line 343, in wrapper
    return func(self, *args, **kwargs)
  File "/data/motus/netbsd8/netbsd8/llvm/tools/lldb/packages/Python/lldbsuite/test/python_api/default-constructor/TestDefaultConstructorForAPIObjects.py", line 133, in test_SBDebugger
    sb_debugger.fuzz_obj(obj)
  File "/data/motus/netbsd8/netbsd8/llvm/tools/lldb/packages/Python/lldbsuite/test/python_api/default-constructor/sb_debugger.py", line 13, in fuzz_obj
    obj.SetInputFileHandle(None, True)
  File "/data/motus/netbsd8/netbsd8/build/lib/python2.7/site-packages/lldb/__init__.py", line 3890, in SetInputFileHandle
    self.SetInputFile(SBFile.Create(file, borrow=True))
  File "/data/motus/netbsd8/netbsd8/build/lib/python2.7/site-packages/lldb/__init__.py", line 5418, in Create
    return cls.MakeBorrowed(file)
  File "/data/motus/netbsd8/netbsd8/build/lib/python2.7/site-packages/lldb/__init__.py", line 5379, in MakeBorrowed
    return _lldb.SBFile_MakeBorrowed(BORROWED)
IOError: [Errno 9] Bad file descriptor
Config=x86_64-/data/motus/netbsd8/netbsd8/build/bin/clang-10
----------------------------------------------------------------------

I was able to determine that the error was produced by a flush() method call invoked on a file descriptor referring to stdin. Accordingly, I've fixed the type conversion method not to flush read-only fds.

Afterwards, Lawrence D'Anna was able to find and fix another fflush() issue.

A newly added test revealed that the platform process list -v command on NetBSD missed listing the process name. I've fixed it to provide Arg0 in the process info.

Another new test failed due to our target not implementing ShellExpandArguments() API. Apparently the only target actually implementing it is Darwin, so I've just marked TestCustomShell XFAIL on all BSD targets.

LLDB upstream was forced to reintroduce a readline module override that aims to prevent readline and libedit from being loaded into a single program simultaneously. This module failed to build on NetBSD. I discovered that the override was meant to be built on Linux only, and since the problem still doesn't affect other platforms, I've made it Linux-only again.

The libunwind build has been changed to link using the C compiler rather than the C++ one. This caused some libc++ failures on NetBSD. The author has reverted the change for now, and is looking for a better way of resolving the problem.

Finally, I have disabled another OpenMP test that caused NetBSD to hang. While ideally I'd like to have the underlying kernel problem fixed, this is non-trivial and I prefer to focus on LLDB right now.

New LLD work

I've been asked to rebase my LLD patches for the new code. While doing it, I've finally committed the -z nognustack option patch from January.

In the meantime, Kamil's been working on finally resolving the long-standing impasse on LLD design. He is working on a new NetBSD-specific frontend to LLD that would satisfy our system-wide linker requirements without modifying the standard driver used by other platforms.

Upgrade to NetBSD 9 beta

Our recent work, especially the work on threading support, has required a number of fixes in the NetBSD kernel. Those fixes were backported to the NetBSD 9 branch, but not to 8. The NetBSD 8 kernel used by the buildbot was therefore suboptimal for testing the new features. Furthermore, with the 9.0 release coming soon-ish, it became necessary to start actively testing it for regressions.

The buildbot was upgraded to NetBSD 9 beta on 2019-11-06. Initially, the upgrade caused LLDB to start crashing on startup. I have not been able to pinpoint the exact issue yet. However, I've established that it happens only at the -O3 optimization level, and I've worked around it by switching the build to -O2. I am planning to look into the problem more once the buildbot is fully restored.

The upgrade to NetBSD 9 caused 4 LLDB tests to start succeeding, and 6 to start failing. Namely:

********************
Unexpected Passing Tests (4):
    lldb-api :: commands/watchpoints/watchpoint_commands/condition/TestWatchpointConditionCmd.py
    lldb-api :: commands/watchpoints/watchpoint_commands/command/TestWatchpointCommandPython.py
    lldb-api :: lang/c/bitfields/TestBitfields.py
    lldb-api :: commands/watchpoints/watchpoint_commands/command/TestWatchpointCommandLLDB.py

********************
Failing Tests (6):
    lldb-shell :: Reproducer/Functionalities/TestExpressionEvaluation.test
    lldb-api :: commands/expression/call-restarts/TestCallThatRestarts.py
    lldb-api :: functionalities/signal/handle-segv/TestHandleSegv.py
    lldb-unit :: tools/lldb-server/tests/./LLDBServerTests/StandardStartupTest.TestStopReplyContainsThreadPcs
    lldb-api :: functionalities/inferior-crashing/TestInferiorCrashingStep.py
    lldb-api :: functionalities/signal/TestSendSignal.py

I am going to start investigating the new failures shortly.

Further LLDB threading work

Fixes to register support

Enabling thread support revealed a NetBSD-specific problem in register API introspection. The API responsible for passing registers in groups to Python was unable to name some of the groups on NetBSD, and the null names caused TestRegistersIterator to fail. Threading support made this particularly visible by replacing a regular test failure with a Python code error.

In order to resolve the problem, I had to describe all supported register sets in the NetBSD register context. The code was roughly based on the Linux equivalent, modified to match the register sets used by our ptrace() API. Interestingly, I also had to include the MPX registers that are currently unimplemented, as otherwise LLDB implicitly put them in an anonymous group.

While at it, I've also changed the register set numbering to match the more common ordering, in order to avoid issues in the future.

Finished basic thread support patch

I've finally completed and submitted the patch for NetBSD thread support. Besides fixing a few mistakes, I've implemented thread affinity support for all relevant SIGTRAP events (breakpoints, traces, hardware watchpoints) and removed the incomplete hardware breakpoint stub that caused LLDB to crash.

In its current form, this patch combines three changes essential to correct support of threaded programs:

  1. It enables reporting of new and exited threads, and maintains debugged thread list based on that.

  2. It modifies the signal (generic and SIGTRAP) handling functions to read the thread identifier and associate the event with the correct thread(s). Previously, all events were assigned to all threads.

  3. It updates the process resuming function to support controlling the state (running, single-stepping, stopped) of individual threads, and raising a signal either to the whole process or to a single thread. Previously, the code used only the action requested for the first thread and applied it to all threads in the process.

Proper watchpoint support in multi-threaded programs

I've submitted a separate patch to copy watchpoints to newly-created threads. This is necessary due to the design of Debug Register support in NetBSD. Quoting the ptrace(2) manpage:

  • debug registers are only per-LWP, not per-process globally
  • debug registers must not be inherited after (v)forking a process
  • debug registers must not be inherited after forking a thread
  • a debugger is responsible to set global watchpoints/breakpoints with the debug registers, to achieve this PTRACE_LWP_CREATE / PTRACE_LWP_EXIT event monitoring function is designed to be used

LLDB supports only per-process watchpoints at the moment. To fit this into the NetBSD model, we need to monitor for new threads and copy the watchpoints to them. Since LLDB does not keep explicit watchpoint information at the moment (it relies on querying the debug registers), the proposed implementation copies the dbregs verbatim from the currently selected thread (all existing threads should have the same dbregs).
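A rough sketch of that copy, assuming NetBSD's ptrace(2) convention of passing the LWP ID in the data argument of the register requests (an illustration, not the actual patch):

#include <sys/types.h>
#include <sys/ptrace.h>
#include <machine/reg.h>

/* On a PTRACE_LWP_CREATE event: copy the debug registers from an
 * existing LWP to the new one, so that process-wide watchpoints
 * stay in effect in the new thread. */
static void
copy_watchpoints(pid_t pid, lwpid_t src_lwp, lwpid_t new_lwp)
{
        struct dbreg db;

        ptrace(PT_GETDBREGS, pid, &db, src_lwp);
        ptrace(PT_SETDBREGS, pid, &db, new_lwp);
}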

Fixed support for concurrent watchpoint triggers

The final problem I've been investigating was a server crash in the new code when multiple watchpoints were triggered concurrently. My final patch aims to fix the handling of concurrent watchpoint events.

When a watchpoint is triggered, the kernel delivers a SIGTRAP with TRAP_DBREG to the debugger. The debugger inspects the DR6 register of the specified thread in order to determine which watchpoint was triggered, and reports it. When multiple watchpoints are triggered simultaneously, the kernel reports that as a series of successive SIGTRAPs. Normally, that works just fine.

However, on x86, watchpoint triggers are reported before the instruction is executed. For this reason, LLDB temporarily disables the watchpoint, single-steps, and reenables it. The problem with that is that the GDB protocol doesn't control watchpoints per thread, so the operation disables and reenables the watchpoint on all threads. As a side effect, DR6 is cleared everywhere.

Now, if multiple watchpoints are triggered concurrently, DR6 is set on all the relevant threads. However, after handling the SIGTRAP on the first one, the disable/reenable (or more specifically, remove/readd) wipes DR6 on all threads. The handler for the next SIGTRAP can't establish the correct watchpoint number and starts looking for breakpoints. Since hardware breakpoints are not implemented, the relevant method returns an error and lldb-server eventually exits.

There are two problems to be solved here. Firstly, lldb-server should not exit in these circumstances. This is already solved in the first patch, as mentioned above. Secondly, we need to be able to handle concurrent watchpoint hits independently of the clear/set packets. That is what this patch solves.

There are multiple possible approaches to this problem. I've chosen to remodel the clear/set watchpoint methods in order to prevent them from resetting DR6 when the same watchpoint is being restored, as the alternatives (such as pre-storing DR6 on the first SIGTRAP) have more corner conditions to be concerned about.

The current design of these two methods assumes that the 'clear' method clears both the triggered state in DR6 and control bits in DR7, while the 'set' method sets the address in DR0..3, and the control bits in DR7.

The new design limits the 'clear' method to disabling the watchpoint by clearing its enable bit in DR7. The remaining bits, as well as the trigger status and the address, are preserved. The 'set' method uses them to determine whether a new watchpoint is being set or a previous one merely reenabled. In the latter case, it just updates DR7, preserving the previous trigger. In the former, it updates all the registers and clears the trigger from DR6.
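In pseudo-C, the new 'clear' reduces to something like the following (using the x86 DR7 layout, where watchpoint n has its local/global enable bit pair at bits 2n and 2n+1; the helper is illustrative, not LLDB's actual code):

#include <stdint.h>

static void
watchpoint_disable(uint64_t *dr7, int wp_index)
{
        /* Clear only the enable bits; the DR0..DR3 address, the
         * remaining DR7 control bits and the DR6 trigger status are
         * preserved, so a concurrent hit can still be attributed. */
        *dr7 &= ~(UINT64_C(0x3) << (wp_index * 2));
}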

This solution effectively prevents the disable/reenable logic of LLDB from clearing concurrent watchpoint hits, and therefore makes it possible for the SIGTRAP handler to report them correctly. If the user manually replaces the watchpoint with another one, DR6 is cleared and LLDB does not associate the concurrent trigger with a watchpoint that no longer exists.

Thread status summary

The current version of the patches fixes approximately 47 test failures, and causes approximately 4 new test failures and 2 hanging tests. There are around 7 new flaky tests, related to signals arriving concurrently with breakpoints or watchpoints.

Future plans

The first immediate goal is to investigate and resolve the test suite regressions related to the NetBSD 9 upgrade. The second goal is to get the threading patches merged, and simultaneously work on resolving the remaining test failures and hangs.

When that's done, I'd like to finally move on with the remaining TODO items. Those are:

  1. Add support to backtrace through signal trampoline and extend the support to libexecinfo, unwind implementations (LLVM, nongnu). Examine adding CFI support to interfaces that need it to provide more stable backtraces (both kernel and userland).

  2. Add support for i386 and aarch64 targets.

  3. Stabilize LLDB and address breaking tests from the test suite.

  4. Merge LLDB with the base system (under LLVM-style distribution).

This work is sponsored by The NetBSD Foundation

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL to chip in what you can:

https://netbsd.org/donations/#how-to-donate


November 01, 2019

NetBSD.org New Developer in October 2019

October 25, 2019

Stack Overflow How can I make a NetBSD VM halt itself in google compute engine

I've got a batch job that I want to run on a NetBSD instance in Google Compute Engine. I expected that I could just run shutdown -hp now at the end of the job and the instance would be powered off. But when I do that, it still remains in the running state according to the Google Cloud console and CLI. How do I make a NetBSD virtual machine in Google Cloud shut itself off when it's no longer needed?

Note: the Google Cloud SDK is not available on NetBSD.


October 16, 2019

Roy Marples dhcpcd-8.1.1 released

dhcpcd-8.1.1 has been released with the following changes:

The last fix involved a lot of people and quite a few different fixes, and played havoc with gcc-9.2, but it should now be resolved.

ftp://roy.marples.name/pub/dhcpcd/dhcpcd-8.1.1.tar.xz
https://roy.marples.name/downloads/dhcpcd/dhcpcd-8.1.1.tar.xz


October 14, 2019

Roy Marples dhcpcd added to DragonFlyBSD .... FreeBSD next?

So, dhcpcd was added to DragonFlyBSD almost a year ago. Recently I've become a DragonFlyBSD committer with the express purpose of easing dhcpcd into the role of the default DHCP client.

All of the really needed kernel improvements are now in, and dhcpcd no longer logs any compile warnings, but there is more work to be done, such as RFC 5227 support, restarting DAD on link state up, and denying the use of an address until it is validated. I'm quite enjoying working on DragonFlyBSD ... their SMP approach is very interesting and in many ways much easier to work with than NetBSD's fine-grained locking approach.

And then, out of the blue, a discussion cropped up on the FreeBSD mailing list about putting dhcpcd into the FreeBSD base system! This has led to me working on privilege separation, which seems to be the only showstopper for FreeBSD acceptance. I have a reasonable idea of how this should work, and hopefully it will be enough.


October 11, 2019

Roy Marples dhcpcd-8.1.0 released

With the following changes:

ftp://roy.marples.name/pub/dhcpcd/dhcpcd-8.1.0.tar.xz
https://roy.marples.name/downloads/dhcpcd/dhcpcd-8.1.0.tar.xz


October 03, 2019

NetBSD.org pkgsrc-2019Q3 released

October 01, 2019

NetBSD.org New Developer in September 2019

August 23, 2019

Unix Stack Exchange NetBSD - Unable to install pkgin

I'm running NetBSD on the Raspberry Pi 1 Model B.

uname -a
NetBSD rpi 7.99.64 NetBSD 7.99.64 (RPI.201703032010Z) evbarm

I'm trying to install pkgin, but I'm receiving an error about a version mismatch ...

pkg_add -f pkgin
pkg_add: Warning: package `pkgin-0.9.4nb4' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.42 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Warning: package `pkg_install-20160410nb1' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.58 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Can't create pkgdb entry: /var/db/pkg/pkg_install-20160410nb1: Permission denied
pkg_add: Can't install dependency pkg_install>=20130901, continuing
pkg_add: Warning: package `libarchive-3.3.1' was built for a platform:
pkg_add: NetBSD/earmv6hf 7.99.59 (pkg) vs. NetBSD/earmv6hf 7.99.64 (this host)
pkg_add: Can't create pkgdb entry: /var/db/pkg/libarchive-3.3.1: Permission denied
pkg_add: Can't install dependency libarchive>=3.2.1nb2, continuing
pkg_add: Can't create pkgdb entry: /var/db/pkg/pkgin-0.9.4nb4: Permission denied
pkg_add: 1 package addition failed

How can I install the correct version?


August 20, 2019

Super User How to run a Windowed JAR file over SSH without installing JRE and without root access on NetBsd?

First: I can use Java, but for what I want to achieve (building a database where the only application supporting the format is written in Java), I need 100 GB of RAM for 20 hours.

I have access to a server with the required RAM, but not as root, and no JRE is available. The same is true for the Xorg libraries.

Here's the uname:

8.0_STABLE NetBSD 8.0_STABLE (GENERIC) #0: Sun Feb 24 10:50:49 UTC 2019  [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC amd64

The Linux layer is installed, but nothing else: not even glibc, so the only applications that can be run are ones that are statically compiled.

So not only is Java not installed, but some of the required shared libraries are missing…
However, I have full write access to my $HOME directory, and I can run my own executables from there.

Is there a way to convert a JAR file into a NetBSD executable or a statically linked Linux executable? I also have the source code of the JAR file, if compiling is an acceptable solution.

I only found out about GCJ, but I'm unsure whether Java 7 is supported…

Amitai Schlair Announcing notqmail

Running my own email server has its challenges. Chief among them: my favorite mail-server software hasn’t been updated since I started using it in 1998.

[Image: the qmail logo]

Okay, that’s not entirely true. While qmail hasn’t been updated by its original author, a group of respected users created netqmail, a series of tiny updates that were informed, conservative, and careful. By their design, it was safe for everyone running qmail to follow netqmail, so everyone did. But larger changes in the world of email — authentication, encryption, and ever-shifting anti-spam techniques — remained as puzzles for each qmail administrator to solve in their own way. And netqmail hasn’t been updated since 2007.

One fork per person

In the interim, devotees have continued maintaining their own individual qmail forks. Some have shared theirs publicly. I’ve preferred the design constraints of making minimal, purpose-specific, and conflict-avoidant add-ons and patches. Then again, these choices are motivated by the needs of my qmail packaging, which I suppose is itself a de facto fork.

I’ve found this solo work quite satisfying. I’ve learned more C, reduced build-time complexity, added run-time configurability, and published unusually polished and featureful qmail packages for over 20 platforms. Based on these experiences, I’ve given dozens of workshops and talks. In seeking to simplify system administration for myself and others, I’ve become a better programmer and consultant.

Still, wouldn’t it be more satisfying if we could somehow pool our efforts? If, long after the end of DJB’s brilliant one-man show, a handful of us could shift how we relate to this codebase — and to each other — in order to bring a collaborative open-source effort to life? If, with netqmail as inspiration, we could produce safe updates while also evolving qmail to meet more present-day needs?

One fork per community

[Image: my subtle artwork: the notqmail logo, i.e. the qmail logo overlaid by a red circle with a slash through it]

Say hello to notqmail.

Our first release is informed, conservative, and careful — but bold. It reflects our brand-new team’s rapid convergence on where we’re going and how we’ll get there. In the span of a few weeks, we’ve:

I say “bold” because, for all the ways we intend to hew to qmail tradition, one of our explicit goals is a significant departure. Back in the day, qmail’s lack of license, redistribution restrictions, technical barriers, and social norms made it hard for OS integrators to create packages, and hard for package users to get help. netqmail 1.06 expressed a desire to change this. In notqmail 1.07, we’ve made packaging much easier. (I’ve already updated pkgsrc from netqmail to notqmail, and some of my colleagues have prepared notqmail RPM and .deb packages.) Further improvements for packagers are part of what’s slated for 1.08.

What’s next

Looking much further ahead, another of our explicit goals is “Meeting all common needs with OS-provided packages”. We have a long way to go. But we couldn’t be off to a better start.

By our design, we believe we’ve made it safe for everyone running qmail to follow notqmail. We hope you’ll vet our changes carefully, then update your installations to notqmail 1.07. We hope you’ll start observing us as we continue the work. We hope you’ll discuss freely on the qmail mailing list. We hope you’ll be a part of the qmail revival in ways that are comfortable for you. And we hope that, in the course of time, notqmail will prove to be the community-driven open-source successor to qmail.


August 11, 2019

Unix Stack Exchange How to use resize_ffs in netbsd

I'm trying to use resize_ffs on NetBSD to increase the size of my partition. I have NetBSD running in a virtual machine and have expanded the size of the disk, and now I wish to grow the partition to match.

The man page for the tool is here:

https://netbsd.gw.com/cgi-bin/man-cgi?resize_ffs+8+NetBSD-7.0

I am trying to grow a 300 MB partition to 1 GB.

The tool's man page says that specifying a size is not mandatory, and that if it is not specified the tool will grow the partition to use the available space (the ideal behaviour); however, this results in an error saying newsize not known.

I have used various online tools to try to calculate the disk geometry, but no matter what I try, when I pass a number to -s I get the error 'bad magic number'.

I have been unable to find examples of using this tool online.

What is the correct way to use resize_ffs to grow a partition to use available disk space?


July 31, 2019

NetBSD Package System (pkgsrc) on DaemonForums xf86-input-keyboard, xf86-video-vmware, unrecoverable error
Hello everybody:

I'm still trying to get NetBSD working. It's a complicated OS, at least at this stage of development. I wonder: how can I use it as a graphical desktop OS if xf86-input-keyboard, xf86-video-vmware, and so on can't be installed?

These are not packages stored in netbsd.org/.../...packages/.../i386/release/All

They are not stored under any release of NetBSD for i386 systems.

All of them must be installed from source...

But an error always arises: randrproto>1.6.0 needed.

This is not an error in NetBSD itself, but at this time it has not been solved, and it seems to be a persistent error across successive NetBSD releases.

According to documents on the net, this bug is solved by using xorgproto instead of randrproto, but that does not really solve anything; the bug is still present, not fixed.

Does anybody have a binary package for xf86-input-keyboard?

A package that can be installed without these issues?

Thank you all for your help.

P.S.: My NetBSD is the 8.0 release, installed in a VMware environment under Windows 7.

July 30, 2019

Unix Stack Exchange pkgin installation problem (NetBSD)

I just installed NetBSD 7.1.1 (i386) on my old laptop.

During the installation, I could not install pkgin (I don't know why), so I skipped it, and now I have NetBSD 7.1.1 installed on my laptop without pkgin.

My problem is: "How do I install pkgin on NetBSD (i386)?"

I found this (Click) tutorial and I followed it:

I tried :

#export PKG_PATH="http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/6.0_BETA_current/All/"
# pkg_add -v pkgin

And I got :

pkg_add: Can't process ftp://ftp.netbsd.org:80/pub/pkgsrc/packages/NetBSD/amd64/6.0_BETA_current/All/%0d/pkgin*: Not Found
pkg_add: no pkg found for 'pkgin',sorry.
pkg_add: 1 package addition failed

I know this is the wrong command, because this FTP address is for amd64 while my laptop and this NetBSD are i386. (I can't find the correct command for i386.)

I also followed the instructions on pkgin.net (Click), and I did:

git clone https://github.com/NetBSDfr/pkgin.git

on another computer and copied the output (which is a folder named pkgin) to my NetBSD machine (my NetBSD doesn't have the 'git' command),

and then I did:

./configure --prefix=/usr/pkg --with-libraries=/usr/pkg/lib --with-includes=/usr/pkg/include

and then:

make

but I got:

#   compile  pkgin/summary.o
gcc -O2    -std=gnu99    -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wno-sign-compare  -Wno-traditional  -Wa,--fatal-warnings -Wreturn-type -Wswitch -Wshadow -Werror    -DPKGIN_VERSION=\""0.9.4 for NetBSD-7.1.1 i386"\" -DNETBSD  -g -DLOCALBASE=\"/usr/local\"           -DPKG_SYSCONFDIR=\"/usr/local/etc\"         -DPKG_DBDIR="\"/var/db/pkg\""           -DDEF_LOG_DIR="\"/var/db/pkg\""         -DPKGIN_DB=\"/var/db/pkgin\"            -DPKGTOOLS=\"/usr/local/sbin\" -DHAVE_CONFIG_H -D_LARGEFILE_SOURCE -D_LARGE_FILES -DCHECK_MACHINE_ARCH=\"i386\" -Iexternal -I. -I/usr/local/include  -c    summary.c
*** Error code 1

Stop.
make: stopped in /root/pkgin

I think this error occurs because of missing dependencies (which are mentioned on pkgin.net), but I still don't know how to install those dependencies.

EDIT: I found "http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.1.1/All/" but it still says

no pkg found for 'pkgin', sorry

SOLVED:

I solved the problem by writing 7.1 instead of 7.1.1.
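
In other words, with the release number corrected, the original approach works:

# export PKG_PATH="http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.1/All/"
# pkg_add -v pkgin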


July 29, 2019

NetBSD General on DaemonForums Fighting with NetBSD installing packages
There is a video explaining how to install NetBSD 8.0. I followed that video, and there is something in it I couldn't find in the NetBSD docs: installing Bash and using pkgin inside it lets you install packages that otherwise can't be installed.

In turn, when I tried to install xf86-input-vmware, xf86-input-keyboard and xf86-video-vmware, these packages turned out not to be in the repository at all.

Looking around the net, I found these packages on an FTP site of SmartOS, which uses NetBSD packages.

I downloaded these packages and installed video-vmware and input-vmmouse using pkg_add -f program_name.tgz.


The package xf86-input-keyboard gives an error that "keyring" was not found, and can't be installed.

The question is: if the video shows those packages being installed directly with pkgin install program_name, why don't those packages exist in the NetBSD repositories anymore?

Using pkgsrc and make install clean gives an unrecoverable error that randrproto>1.6.0 is needed.

I hope NetBSD will update its repositories, because otherwise it is very difficult to work with this OS.

Can anybody help me with this?

Thanks
Unix Stack Exchange How to install directly from a package *.tgz file in NetBSD, OpenBSD, or FreeBSD

Is there any way to install software from the *.tgz file that is its package, in NetBSD? Or indeed in operating systems with similar package managers such as OpenBSD or FreeBSD?

For example, I can install the nano editor on NetBSD using this command:

pkgin nano

(I could do the same with a similar pkg install nano command on FreeBSD.)

What if I download the package file directly from the operating system's package repository, which would be a URL like http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.1/All/nano-2.8.7.tgz?

Having obtained the package file from the repository by hand like this, is there any way to now install nano directly from it? How do I do that?
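
On NetBSD, pkg_add accepts a package file directly, and FreeBSD's pkg has an analogous add subcommand (a short sketch, with hypothetical file names):

# pkg_add ./nano-2.8.7.tgz    # NetBSD: install from a local .tgz
# pkg add ./nano-2.8.7.txz    # FreeBSD: note "pkg add", not "pkg install"

Note that dependencies must already be installed (or also supplied as files), since a lone package file brings no repository along to resolve them from.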


July 24, 2019

Unix Stack Exchange How to compile fIcy for BSD?

I'm trying to compile fIcy (https://gitlab.com/wavexx/fIcy) for NetBSD/FreeBSD.

When I execute the make command, nothing happens, not even an error message.

The same source package compiles without problems on Debian 10.

Is the Makefile even compatible with BSD?

https://gitlab.com/wavexx/fIcy/blob/master/Makefile

The commands I used so far on FreeBSD 12:

pkg install gcc
wget https://gitlab.com/wavexx/fIcy/-/archive/master/fIcy-master.tar.gz
tar xfvz fIcy-master.tar.gz
cd fIcy-master
make

type make
make is /usr/bin/make
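
One hedged guess: /usr/bin/make on the BSDs is BSD make, and a Makefile written for GNU make can appear to do nothing under it. Trying GNU make is cheap:

# pkg install gmake    # FreeBSD; on NetBSD: pkgin install gmake
$ gmake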

July 13, 2019

Jeremy C. Reed 2019-July-13 pfSense Essentials Book Writing

This week I received my printed proof from the printer and approved it for printing. It is now for sale at Amazon and Barnes and Noble.

I set an alarm to work on it very early a few days a week and it took me a few years. (I am blessed to only commute a few times a year, so I make sure that I don't waste that gifted time.)

This book was written in DocBook, using NetBSD and vi. The print-ready book was generated with Dblatex version 0.3.10 with a custom stylesheet, pdfTeX 3.14159265-2.6-1.40.19 (Web2C 2018), and the TeX document production system installed via TeX Live and pkgsrc. Several scripts and templates were created to help keep the document consistent.

The book work was managed using the Subversion version control software. I carefully outlined my steps in utilizing the useful interfaces and identified every web and console interface. The basic writing process included adding over 350 special comment tags to the DocBook source files that identified topics to cover, and one for every pfSense web interface PHP script (highlighting whether they were main web pages from the pfSense menu). As content was written, I updated these special comments with a current status. A periodic script checked the DocBook files and the generated book and reported on writing progress and current needs.

During this writing, nearly every interface was tested. In addition, code and configurations were often temporarily customized to simulate various pfSense behaviors and system situations. Most of the pfSense interface and low-level source code was studied, which helped with identifying pfSense configurations and features, and all their options, that don't display in standard setups. The software was upgraded several times and was installed and run in multiple VMs and hardware environments with many wireless and network cards, including with IPv6. In addition, third-party documentation and even source code was researched to help explain pfSense configurations and behaviors.

As part of this effort, I documented 352 bugs (some minor and some significant) and code suggestions that I found from code reading or from actual use of the system. (I need to post that.)

The first Subversion commit for this book was in July 2014. It has commits in 39 different months, with 656 commits in total. The book's DocBook source had 3789 non-printed comments and 56,193 non-blank lines of text. The generated book has over 180,000 words. My Subversion logs show I have commits on 41 different Saturdays. Just re-reading with cleanup took me approximately 160 hours.


July 11, 2019

Stack Overflow configuration of tty on BSD system

For a command like this one on Linux (debian-linux 4.19.0-1-amd64 #1 SMP Debian 4.19.12-1 (2018-12-22) x86_64 GNU/Linux) with Xfce I get:

[email protected]:~$ dbus-send --system --type=method_call --print-reply --dest
=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListActivatable  
Names

With the same command on OpenBSD (LeOpenBSD 6.4 GENERIC.MP#364 amd64) with Xfce I get:

ktop/DBus org.freedesktop.DBus.ListActivatableNames   <

On Linux, at the end of the screen, the cursor moves to the next line.
On BSD (OpenBSD, NetBSD), the command line continues on the same line and the first words disappear.
It's the same in the Xfce terminal emulator, xterm, or on a TTY (Ctrl-Alt-F3).

I tried adding am to the default section of gettytab, to no avail.
The termcap man page says:
If the display wraps around to the beginning of the next line when the cursor reaches the right margin, then it should have the am capability.
What can I do?
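
As a first check (a sketch; both NetBSD and OpenBSD ship infocmp), it may be worth confirming whether the terminal type in use advertises automatic margins at all:

$ echo $TERM
$ infocmp -1 | grep -w am    # 'am' listed means auto-margins is advertised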


July 09, 2019

NetBSD Package System (pkgsrc) on DaemonForums Zabbix Frontend Dependencies
Hi all,
I used pkgsrc to install the Zabbix frontend. I noticed, though, that it automatically installs some php71 dependencies. I really wanted to use php73, as php71 has some vulnerabilities. Is there a way to do that?
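
pkgsrc selects the PHP branch through a variable in mk.conf, so (a sketch, assuming the frontend package builds against 7.3) setting it before rebuilding may be all that's needed:

# /etc/mk.conf
PHP_VERSION_DEFAULT= 73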

July 08, 2019

Server Fault Webserver farm with NFS share (autofs failure)

I am trying to set up a farm of web servers, consisting of internal, external, and worker servers.

  1. The actual site content is stored on an internal NFS server deep in the internal network. All site content management is centralized.

  2. The BSD-based external servers have Lighttpd doing all the HTTP/HTTPS work, serving static content. The main NFS share is auto-mounted via a special path like /net/server/export/www/site/ (via amd).

  3. Every Lighttpd has FastCGI parameters pointing to several worker servers, which run php-fpm (for example). Different sites may require different PHP versions or arrangements, so www01 and www02 may serve site "A" with php-fpm on PHP 5.6 while www05 and www06 serve site "B" with php-fpm on PHP 7.2.

  4. Every worker gets requests for certain sites (one or more) with the path /net/server/export/www/site and executes PHP or any other code. The workers also run amd (on BSD) or autofs (on Linux).

  5. For some sites, Lighttpd may not forward FastCGI but do proxying instead, so workers can run Apache or another web server (even a Java-based one).

External servers are always BSD, and internal servers too, but the workers can differ according to actual needs.

This all works well when the workers are BSD. If we use Linux on the workers, it stops working once the share is automatically unmounted: when someone tries to access the site, they get a 404 error. When I connect to the server via ssh, I see no mounted share in "df -h". If I do any "ls" on /net/server/export, it mounts itself as intended and the site starts to work. On BSD systems, df shows the amd shares as always mounted, despite the 60-second dismount period.

I believe there is a difference between the amd and autofs approaches: php-fpm calls on Linux seem to be somehow "invisible" to autofs and do not trigger the auto-mount, whereas any other access to /net/server/ works at any time and does trigger it. Also, this happens not only with php-fpm; Apache serving static content from an auto-mounted NFS share behaves the same way.

Sorry for the long description, but I tried to describe it well. The main question: does anyone know why calls to /net/server may not trigger the auto-mount in autofs, and how to prevent this behavior?

For a lot of reasons I am not considering static mounting, so that is not an option here. As for Linux versions, it was mostly tested on OEL 7.recent.
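
For context, the /net paths on the Linux workers in a setup like this typically come from autofs's built-in -hosts map, e.g. this line in /etc/auto.master (the timeout mirroring amd's 60 seconds):

/net  -hosts  --timeout=60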


July 04, 2019

OS News OpenBSD is now my workstation
Why OpenBSD? Simply because it is the best tool for the job for me for my new-to-me Lenovo Thinkpad T420. Additionally, I do care about security and non-bloat in my personal operating systems (business needs can have different priorities, to be clear). I will try to detail my reasons for going with OpenBSD (instead of GNU/Linux, NetBSD, or FreeBSD, all of which I’m comfortable using without issue), the challenges and frustrations I’ve encountered, and my opinions along the way. I’ve never managed to really get into the BSDs, as Linux has always served my needs for a UNIX-like operating system quite well. I feel like the BSDs are more pure and less messy than Linux, but is that actually true, or just my perception?

July 03, 2019

Super User Using a Console-only NetBSD VM

I am experimenting with NetBSD and seeing if I can get the Fenrir screen reader to run on it. However, I hit a snag after the install: the console I was using for the installation worked perfectly fine, but it stopped working altogether once I completed the install. For reference, here is the line I used for virt-install:

virt-install --connect qemu:///system -n netbsd-testing \
             --ram 4096 --vcpus=8 \
             --cpu=host \
             -c /home/sektor/Downloads/boot-com.iso  \
             --os-type=netbsd --os-variant=netbsd8.0 \
             --disk=pool=devel,size=100,format=qcow2 \
             -w network=default --nographics 

When it asked me for the type of terminal I was using (this being the NetBSD install program), I accepted the default, which was VT200. As I recall, I told it to use the BIOS for booting, and not any of the COM serial ports. Has anyone had any further experience with using no graphics on a Libvirt virtualized machine, and any pointers as to how to get a working console?

Thanks.
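
One plausible culprit (an assumption from the symptoms, not a confirmed diagnosis): telling the installer to boot via the BIOS leaves the installed bootloader on the VGA console, which --nographics cannot display. Redirecting the bootloader and kernel console to the first serial port, per boot.cfg(5), would look like this line in the installed system's /boot.cfg:

consdev=com0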


June 29, 2019

NetBSD General on DaemonForums View X session of instance in VirtualBox via VNC
Does anyone have a working howto on how to attach X session on NetBSD running within VirtualBox to VNC on the host computer?

May 24, 2019

NetBSD Installation and Upgrading on DaemonForums no bootable device after installation
After installing NetBSD 8 I have a couple of problems.
1. If the USB drive with the installation image is not inserted, the system will not boot.
2. Running X -configure causes a reboot.

1. Without the installation USB:
Code:

PXE-M0F: Exiting PXE ROM.
No bootable device -- insert boot disk and press any key

The first time I thought I had made a mistake and done something to the BIOS, but the partitions look fine, just as they should according to The Guide:
Code:

a:  0    472983    472984    FFSv2
b:  472984    476939    3985    swap
c:  0    476939    476939    NetBSD partition
d:  0    476939    476940    whole disc
e:  0    0    0    unused

I am at a bit of a loss, since as far as I know it should not be possible to set an installation medium as the boot source of an OS.

2. I do not know if this is unsupported hardware or related to #1.
Code:

DRM error in radeon_get_bios:
Unable to locate a BIOS ROM
radeon0: error: Fatal error during GPU init

I am trawling through documentation, but on a telephone, so I also cannot post a dmesg, although I can look through other threads where it is posted and copy it. (A little later in the day.)

March 15, 2019

Stack Overflow host netbsd 1.4 or 1.5 i386 cross-compile target macppc powerpc g3 program

For some reason, I want to develop a program which can work on NetBSD 1.4 or 1.5 on PowerPC; the target CPU is a PowerPC 750 (a nearly 20-year-old system). But I can't find out how to set up this kind of cross-compile environment: VMware host (i386, NetBSD 1.5 + egcs 1.1.1 + binutils 2.9.1) targeting macppc (PowerPC, NetBSD 1.5 + egcs 1.1.1). I downloaded and installed NetBSD 1.5 in VMware and downloaded pkgsrc, but when I make /usr/src/pkgsrc/cross/powerpc-netbsd, I get a gcc that works on i386 but not a cross-gcc. Why? Thank you for any help!


March 07, 2019

Amitai Schlair NYCBUG: Maintaining qmail in 2019

On Wednesday, March 6, I attended New York City BSD User Group and presented Maintaining qmail in 2019. This one pairs nicely with my recent DevOpsDays Ignite talk about why and how to Run Your @wn Email Server! That this particular “how” could be explained in 5 minutes is remarkable, if I may say so myself. In this NYCBUG talk — my first since 2014 — I show my work. It’s a real-world, open-source tale of methodically, incrementally reducing complexity in order to afford added functionality.

My abstract:

qmail 1.03 was notoriously bothersome to deploy. Twenty years later, for common use cases, I’ve finally made it pretty easy. If you want to try it out, I’ll help! (Don’t worry, it’s even easier to uninstall.) Or just listen as I share the sequence of stepwise improvements from then to now — including pkgsrc packaging, new code, and testing on lots of platforms — as well as the reasons I keep finding this project worthwhile.

Here’s the video:


February 23, 2019

Stack Overflow How to perform a 308 open redirect with php and apache?

I want to perform an open redirect, so that

https://ytrezq.sdfeu.org/flashredirect/?endpoint=https://www.google.fr/

would redirect to https://www.google.fr.
Here’s /index.cgi, which of course has exec permissions.

#!/usr/pkg/bin/php
<?php
header("Location: ".$_GET["endpoint"], true, 307);
?>

And here’s /flashredirect/.htaccess:

Options FollowSymLinks
Options +ExecCGI
AddHandler cgi-script .cgi .pl
RewriteEngine On
RewriteBase /
FallbackResource /index.cgi

Obviously, there’s an error somewhere, but where? Also, accessing error logs is a paid feature on sdf-eu.org, so I can’t find out what the problem is.
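
Incidentally, the script above sends a 307, not the 308 from the title; a 308 variant (a sketch, leaving aside whether an open redirect is a good idea at all) differs only in the status code:

#!/usr/pkg/bin/php
<?php
// 308 Permanent Redirect: like 307, the request method may not change
header("Location: " . $_GET["endpoint"], true, 308);
?>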


January 25, 2019

Amitai Schlair DevOpsDays NYC: Run Your @wn Email Server!

In late January, I was at DevOpsDays NYC in midtown Manhattan to present Run Your @wn Email Server!

My abstract:

When we’re responsible for production, it can be hard to find room to learn. That’s why I run my own email server. It’s still “production” — if it stays down, that’s pretty bad — but I own all the decisions, take more risks, and have learned lots. And so can you! Come see why and how to get started.

With one command, install famously secure email software. A couple more and it’s running. A few more and it’s encrypted. Twiddle your DNS, watch the mail start coming in, and start feeling responsible for a production service in a way that web hosting can’t match.


January 07, 2019

Amitai Schlair 2018Q4 qmail updates in pkgsrc

Happy 2019! Another three months, another stable branch for pkgsrc, the practical cross-platform Unix package manager. I’ve shipped quite a few improvements for qmail users in our 2018Q4 release. In three sentences:

  1. qmail-run gains TLS, SPF, IPv6, SMTP recipient checks, and many other sensible defaults.
  2. Most qmail-related packages — including the new ones used by qmail-run — are available on most pkgsrc platforms.
  3. rc.d-boot starts rc.conf-enabled pkgsrc services at boot time on many platforms.

In one:

It’s probably easy for you to run qmail now.

On this basis, at my DevOpsDays NYC talk in a few weeks, I’ll be recommending that everyone try it.

Try it

Here’s a demo on CentOS 7, using binary packages. The main command I ran:

$ sudo env PKG_RCD_SCRIPTS=yes pkgin -y install qmail-run rc.d-boot

Here’s another demo on Debian 9, building from source packages. The commands I ran:

$ cd ...pkgsrc/mail/qmail-run && make PKG_RCD_SCRIPTS=yes install
$ cd ../../pkgtools/rc.d-boot && make PKG_RCD_SCRIPTS=yes install

These improvements were made possible by acceptutils, my redesigned TLS and SMTP AUTH implementation that obviates the need for several large and conflicting patches. Further improvements are expected.

Here’s the full changelog for qmail as packaged in pkgsrc-2018Q4.

Removed

Updated

Added


September 15, 2018

Amitai Schlair Coding Tour Summer 2018: Conclusion

After my fourth and final tour stop, we decamped to Mallorca for a week. With no upcoming workshops to polish and no upcoming plans to finalize, the laptop stayed home. Just each other, a variety of beaches, and the annual Les Festes del Rei En Jaume that Bekki and I last saw two years ago on our honeymoon. The parade was perhaps a bit much for Taavi.

Looking away

The just-released episode 99 of Agile for Humans includes some reflections (starting around 50 minutes in) from partway through my coding tour. As our summer in Germany draws to a close, I’d like to reflect on the tour as a whole.

Annual training

I’ve made a habit of setting aside time, attention, and money each year for focused learning. My most recent trainings, all formative and memorable:

I hoped a coding tour would fit the bill for 2018. It has.

Geek joy

At the outset, I was asked how I’d know whether the tour had gone well. My response: “It’s a success if I get to meet a bunch of people in a bunch of places and we have fun programming together.”

I got to program with a bunch of people in a bunch of places. We had fun doing it. Success!

New technologies

My first tour stop offered such an ecumenical mix of languages, tools, and techniques that I began writing down each new technology I encountered. I’m glad I started at the beginning. Even so, this list of things that were new or mostly new to me is probably incomplete:

In the moment, learning new technologies was a source of geek joy. In the aggregate, it’s professionally useful. I think the weight clients tend to place on consultants needing to be expert in their tech stack is dangerously misplaced, but it doesn’t matter what I think if they won’t bring me in. Any chance for me to broaden my tech background is a chance for a future client to take advantage of all the other reasons I can be valuable to them.

Insights

As Schmonz’s Theorem predicts, code-touring is both similar to and different from consulting.

When consulting, I expect most of my learning to be meta: the second loop (at least) of double-loop learning. When touring, I became reacquainted with the simple joys of the first loop, spending all day learning new things to be able to do. It often felt like play.

When consulting, I initially find myself being listened to in a peculiar way, my words being heard and measured carefully for evidence of my real intentions. My first tasks are to demonstrate that I can be trusted and that I can be useful, not necessarily in that (or any) order. Accomplishing this as a programmer on tour felt easier than usual.

When I’m consulting, not everyone I encounter wants me there. Some offer time and attention because they feel obligated. On this tour, even though some folks were surprised to find out their employer wasn’t paying me anything, I sensed people were sharing their time and attention with me out of curiosity and generosity. I believe I succeeded in making myself trusted and useful to each of them, and the conversation videos and written testimonials help me hold the belief.

Professional development

With so much practice designing and facilitating group activities, so much information-rich feedback from participants, and so many chances to try again soon, I’ve leveled up as a facilitator. I was comfortable with my skills, abilities, and material before; I’m even more comfortable now. In my tour’s final public meetup, I facilitated one of my legacy code exercises for three simultaneous mobs. It went pretty well — in large part because of the participants, but also because of my continually developing skill at designing and facilitating learning experiences.

As a consultant, it’s a basic survival skill to quickly orient myself in new problem spaces. As a coach, my superpower might be that I help others quickly orient themselves in their problem spaces. Visiting many teams at many companies, I got lots of practice at both. These areas of strength for me are now stronger, the better with which to serve my next clients.

On several occasions I asked mobs not to bother explaining the current context to me before starting the timer. My hypothesis was, all the context I’d need would reveal itself through doing the work and asking a question or two along the way. (One basis among many for this hypothesis: what happened when I showed up late to one of Lennart Fridén’s sessions at this spring’s Mob Programming Conference and everyone else had already read the manual for our CPU.) I think there was one scenario where this didn’t work extremely well, but my memory’s fuzzy — have I mentioned meeting a whole bunch of people at a whole bunch of workplaces, meetups, and conferences? — so I’ll have to report the details when I rediscover it.

You can do this too, and I can help

When designing my tour, I sought advice from several people who’d gone on one. (Along the way I met several more, including Ivan Sanchez at SPA in London and Daniel Temme at SoCraTes in Soltau.)

If you’re wondering whether a coding tour is something you want to do, or how to make it happen, get in touch. I’m happy to listen and offer my suggestions.

What’s next for me, and you can help

Like what I’m doing? Want more of it in your workplace?

I offer short, targeted engagements in the New York metro area — coaching, consulting, and training — co-designed with you to meet your organization’s needs.

More at latentagility.com.

Gratitude

Yes, lots.

It’s been a splendid set of privileges to have the free time to go on tour, to have organizations in several countries interested to have me code with them, and to meet so many people who care about what I care about when humans develop software together.

Five years ago I was discovering the existence of a set of communities of shared values in software development and my need to feel connected to them. Today I’m surer than ever that I’ve needed this connection and that I’ve found it.

Thanks to the people who hosted me for a week at their employer: Patrick Drechsler at MATHEMA/Redheads in Erlangen, Alex Schladebeck at BREDEX in Braunschweig, Barney Dellar at Canon Medical Research in Edinburgh, and Thorsten Brunzendorf at codecentric in Nürnberg and München. And thanks to these companies for being willing to take a chance on bringing in an itinerant programmer for a visit.

Thanks and apologies in equal measure to Richard Groß, who did all the legwork to have me visit MaibornWolff in Frankfurt, only to have me cancel at just about the last minute. At least we got to enjoy each other’s company at Agile Coach Camp Germany and SoCraTes (the only two people to attend both!).

Thanks to David Heath at the UK’s Government Digital Service for inviting me to join them on extremely short notice when I had a free day in London, and to Olaf Lewitz for making the connection.

Thanks to the meetups and conferences where I was invited to present: Mallorca Software Craft, SPA Software in Practice, pkgsrcCon, Hackerkegeln, JUG Ostfalen, Lean Agile Edinburgh, NEBytes, and Munich Software Craft. And thanks to Agile Coach Camp Germany and SoCraTes for the open spaces I did my part to fill.

Thanks to Marc Burgauer, Jens Schauder, and Jutta Eckstein for making time to join me for a meal. Thanks to Zeb Ford-Reitz, Barney Dellar, and their respective spice for inviting me into their respective homes for dinner.

Thanks to J.B. Rainsberger for simple, actionable advice on making it easy for European companies to reimburse my expenses, and more broadly on the logistics of going on European consulting-and-speaking tours when one is from elsewhere. (BTW, his next tour begins soon.)

Thanks all over again to everyone who helped me design and plan the tour, most notably Dr. Sal Freudenberg, Llewellyn Falco, and Nicole Rauch.

Thanks to Woody Zuill, Bryan Beecham, and Tim Bourguignon for that serendipitous conversation in the park in London. Thanks to Tim for having been there in the park with me. (No thanks to Woody for waiting till we’d left London before arriving. At least David Heath and GDS got to see him. Hmph.)

Thanks to Lisi Hocke for making my wish a reality: that her testing tour and my coding tour would intersect. As a developer, I have so much to learn about testing and so few chances to learn from the best. She made it happen. A perfect ending for my tour.

Thanks to Ryan Ripley for having me on Agile for Humans a couple more times as the tour progressed. I can’t say enough about what Ryan and his show have done for me, so this’ll have to be enough.

Thanks to everyone else who helped draw special attention to my tour when I was seeking companies to visit, most notably Kent Beck. It really did help.

Another reason companies cited for inviting me: my micropodcast, Agile in 3 Minutes. Thanks to Johanna Rothman, Andrea Goulet, Lanette Creamer, Alex Harms, and Jessica Kerr for your wonderful guest episodes. You’ve done me and our listeners a kindness. I trust it will come back to you.

Thank you to my family for supporting my attempts at growth, especially when I so clearly need it.

Finally, thanks to all of you for following along and for helping me find the kind of consulting work I’m best at, close to home in New York. You can count on me continuing to learn things and continuing to share them with you.

FIN



March 17, 2018

Hubert Feyrer The adventure of rebuilding g4u from source
I was asked by a long-time g4u user for help with rebuilding g4u from source. After pointing at the instructions on the homepage, we figured out that a few loose odds and ends didn't match. After bouncing some advice back and forth, I ventured into the frabjous joy of starting a rebuild from scratch, and quickly enough ran into some problems, too.

Usually I cross-compile g4u from Mac OS X, but for the fun of it I did it on NetBSD (7.0-stable branch, amd64 architecture in VMware Fusion) this time. After waiting forever on the CVS checkout, I found that empty directories were not removed - that's what you get if you have -P in your ~/.cvsrc file.

I already had the hint that the "g4u-build" script needed a change to have "G4U_BUILD_KERNEL=true".

From there, things went almost smoothly: building flagged a few files with "variable may be used uninitialized" errors, which -- thanks to -Werror -- bombed out the build. Fixing them was easy, and I have no idea why that built for me for the release. I have sent a patch with the required changes to the g4u-help mailing list. (After fixing that I apparently got unsubscribed from my own support mailing list - thank you very much, Sourceforge ;)).

After those little hassles, the build worked fine, and gave me the floppy disk and ISO images that I expected:

>       ls -l `pwd`/g4u*fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u1.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u2.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u3.fs
>       -rw-r--r--  2 feyrer  staff  1474560 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u4.fs
>       ls -l `pwd`/g4u.iso
>       -rw-r--r--  2 feyrer  staff  6567936 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u.iso
>       ls -l `pwd`/g4u-kernel.gz
>       -rw-r--r--  1 feyrer  staff  6035680 Mar 17 19:27 /home/feyrer/work/NetBSD/cvs/src-g4u.v3-deOliviera/src/distrib/i386/g4u/g4u-kernel.gz
Next steps are to confirm with my faithful tester that the above changes work, and then to look into merging this into the build instructions.

January 12, 2018

Super User What is the default File System in NetBSD? What are its benefits and shortcomings?

I spent some time looking through the documentation, but honestly, I have not found any good answer.

I understand NetBSD supports many FS types, some in user space, but I would like to know which FS the installer creates by default, and which one I can boot from.


January 04, 2018

Hubert Feyrer NetBSD 7.1.1 released
On December 22nd, NetBSD 7.1.1 was released as a premature Christmas present; see the release announcement.

NetBSD 7.1.1 is the first update with security and critical fixes for the NetBSD 7.1 branch. Those include a number of fixes for security advisories, in both kernel and userland.

Hubert Feyrer New year, new security advisories!
So things have become a bit silent here, which is due to real life - my apologies. Still, I'd like to wish everyone following along a Happy New Year 2018! And with this, a few new security advisories have been published:
Hubert Feyrer 34C3 talk: Are all BSDs created equally?
I haven't seen this mentioned on the NetBSD mailing lists, and this may be of interest to some - there was a talk about security bugs in the various BSDs at the 34th Chaos Communication Congress:

In summary, it shows the many causes of bugs in many areas of the kernel (system calls, file systems, network stack, compat layer, ...), and what happened after they were made known to the projects.

As a hint, NetBSD still has a number of Security Advisories to publish, it seems. Anyone want to help out the security team? :-)


June 22, 2017

Server Fault How to log ssh client connection/command?

I would like to know how I could log the SSH command lines a user runs on a server. For example, if the user Alex on my server runs the following set of commands:

$ cd /tmp
$ touch myfile
$ ssh [email protected]
$ ssh [email protected]
$ vim anotherfile
$ ssh [email protected]

I would like to log the ssh commands used on the server in a file which looks like:

[2014-07-25 10:10:10] Alex : ssh [email protected]
[2014-07-25 10:18:20] Alex : ssh [email protected]
[2014-07-25 11:15:10] Alex : ssh [email protected]

I don't care what he did during his ssh session; I just want to know WHEN and TO WHERE he made a connection to another server.

The user is not using bash, and I would like to avoid relying on .bash_history anyway, as the user can modify it.

Any clue on this?

Thank you :)

Edit, to be more specific:

A user connects to server A and then connects from server A to server B. I want to track down which servers he connects to through ssh from server A.
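
One low-tech approach (a sketch; it assumes /usr/local/bin precedes /usr/bin in the users' PATH, and that users won't simply call /usr/bin/ssh directly) is a logging wrapper around the real ssh:

#!/bin/sh
# /usr/local/bin/ssh: record who ran ssh with which arguments, then hand off
logger -t ssh-wrapper "$(id -un) : ssh $*"
exec /usr/bin/ssh "$@"

logger(1) sends the line to syslog, which supplies the timestamps shown in the desired log format.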


June 08, 2017

Hubert Feyrer g4u 2.6 released
After a five-year period for beta-testing and updating, I have finally released g4u 2.6. With its origins in 1999, I'd like to say: Happy 18th Birthday, g4u!

About g4u: g4u ("ghosting for unix") is a NetBSD-based bootfloppy/CD-ROM that allows easy cloning of PC harddisks to deploy a common setup on a number of PCs using FTP. The floppy/CD offers two functions. The first is to upload the compressed image of a local harddisk to a FTP server, the other is to restore that image via FTP, uncompress it and write it back to disk. Network configuration is fetched via DHCP. As the harddisk is processed as an image, any filesystem and operating system can be deployed using g4u. Easy cloning of local disks as well as partitions is also supported.

The past: When I started g4u, I had the task of installing a number of lab machines with a dual boot of Windows NT and NetBSD. The hype was about Microsoft's "Zero Administration Kit" (ZAK) then, but that barely worked for the Windows part - file transfers were slow, depended a lot on the clients' hardware (requiring fiddling with MS-DOS network driver disks), and on the ZAK server the files for installing happened to disappear for no good reason every now and then. With that not working well, and leaving out NetBSD (and everything else), I created g4u. This gave me the (relative) pain of getting things working once, but with the option to easily add network drivers as they appeared in NetBSD (and oh they did!), plus it allowed me to install any operating system.

The present: We used g4u successfully in our labs back then, booting from CD-ROM. I also got many donations from public and private institutions plus companies from many sectors, indicating that g4u does make a difference.

In the meantime, the world has changed, and CD-ROMs aren't used that much any more. Network boot and USB sticks are today's devices of choice; cloning a full disk without knowing its structure has both advantages and disadvantages; and g4u's user interface is still command-line based, with not much room for automation. For storage, FTP servers are nice and fast, but alternatives like SSH/SFTP, NFS, iSCSI, and SMB for remote storage, plus local storage (back to fun with filesystems, anyone? avoiding this was why g4u was created in the first place!), should be considered these days. Further aspects include integrity (checksums) and confidentiality (encryption). This leaves a number of open points to address, either by future releases or by other products.

The future: At this point, my time budget for g4u is very limited. I welcome people to contribute to g4u - g4u is Open Source for a reason. Feel free to get back to me for any changes that you want to contribute!

The changes: Major changes in g4u 2.6 include:

The software: Please see the g4u homepage's download section on how to get and use g4u.

Enjoy!


February 23, 2017

Julio Merino Easy pkgsrc on macOS with pkg_comp 2.0

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on macOS using the macOS-specific self-installer.

Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to maintain your macOS system up-to-date and secure.

This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more generic tutorial that uses the pkg_comp-cron package in pkgsrc, see Keeping NetBSD up-to-date with pkg_comp 2.0.

Getting started

Start by downloading and installing OSXFUSE 3 and then download the standalone macOS installer package for pkg_comp. To find the right file, navigate to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkg_comp-<version>-macos.pkg.

Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local/; note that pkg_comp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another.

The installer modifies the default PATH (by creating /etc/paths.d/pkg_comp) to include pkg_comp’s own installation directory and pkgsrc’s installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts accordingly if you don’t use the standard ones.

Lastly, make sure to have Xcode installed in the standard /Applications/Xcode.app location and that all components required to build command-line apps are available. Tip: try running cc from the command line and seeing if it prints its usage message.

Adjusting the configuration

The macOS flavor of pkg_comp is configured with an installation prefix of /usr/local/, which means that the executable is located in /usr/local/sbin/pkg_comp and the configuration files are in /usr/local/etc/pkg_comp/. This is intentional to keep the pkg_comp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in.

The configuration files are as follows:

Note that these configuration files use the /var/pkg_comp/ directory as the dumping ground for: the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.
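
Among them is list.txt, the list of packages to build, one pkgsrc package name per line. A minimal hypothetical example:

pkgin
tmux
vim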

The cron job

The installer configures a cron job that runs as root to invoke pkg_comp daily. The goal of this cron job is to keep your local packages repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e.

This cron job won’t have an effect until you have populated the list.txt file as described above, so it’s safe to leave it enabled until you have configured pkg_comp.

If you want to disable the periodic builds, just remove the pkg_comp entry from the crontab.

On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from @daily to @weekly.

Sample configuration

Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect. I’ll be using the values shown here in the rest of the tutorial:

$ pkg_comp config
AUTO_PACKAGES = autoconf automake bash colordiff dash emacs24-nox11 emacs25-nox11 fuse-bindfs fuse-sshfs fuse-unionfs gdb git-base git-docs glib2 gmake gnuls libtool-base lua52 mercurial mozilla-rootcerts mysql56-server pdksh pkg_developer pkgconf pkgin ruby-jekyll ruby-jekyll-archives ruby-jekyll-paginate scmcvs smartmontools sqlite3 tmux vim
CVS_ROOT = :ext:anoncvs@anoncvs.NetBSD.org:/cvsroot
CVS_TAG is undefined
DISTDIR = /var/pkg_comp/distfiles
EXTRA_MKCONF = /usr/local/etc/pkg_comp/extra.mk.conf
FETCH_VCS = git
GIT_BRANCH = trunk
GIT_URL = https://github.com/jsonn/pkgsrc.git
LOCALBASE = /opt/pkg
NJOBS = 4
PACKAGES = /var/pkg_comp/packages
PBULK_PACKAGES = /var/pkg_comp/pbulk-packages
PKG_DBDIR = /opt/pkg/libdata/pkgdb
PKGSRCDIR = /var/pkg_comp/pkgsrc
SANDBOX_CONFFILE = /usr/local/etc/pkg_comp/sandbox.conf
SYSCONFDIR = /opt/pkg/etc
UPDATE_SOURCES = true
VARBASE = /opt/pkg/var

DARWIN_NATIVE_WITH_XCODE = true
SANDBOX_ROOT = /var/pkg_comp/sandbox
SANDBOX_TYPE = darwin-native

Building your own packages by hand

Now that you are fully installed and configured, you’ll build some stuff by hand to ensure the setup works before the cron job comes in.

The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt, is something like this:

$ sudo pkg_comp auto

This trivial-looking command will:

  1. clone or update your copy of pkgsrc;
  2. create the sandbox;
  3. bootstrap pkgsrc and pbulk;
  4. use pbulk to build the given packages; and
  5. destroy the sandbox.

After a successful invocation, you’ll be left with a collection of packages in the /var/pkg_comp/packages/ directory.

If you’d like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file).

But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

$ sudo pkg_comp fetch
$ sudo pkg_comp sandbox-create
$ sudo pkg_comp bootstrap
$ sudo pkg_comp build <package names here>
$ sudo pkg_comp sandbox-destroy

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details.

Lastly note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log/.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkg_comp.

First, unpack the pkgsrc installation. You only have to do this once:

$ cd /
$ sudo tar xzvpf /var/pkg_comp/packages/bootstrap.tgz

That’s it. You can now install any packages you like:

$ PKG_PATH=file:///var/pkg_comp/packages/All sudo pkg_add pkgin <other package names>

The command above assumes you have restarted your shell to pick up the correct path to the pkgsrc installation. If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date

Thanks to the cron job that builds your packages, your local repository under /var/pkg_comp/packages/ will always be up-to-date; you can use that to quickly upgrade your system with minimal downtime.

Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository:

$ sudo /bin/sh -c "echo file:///var/pkg_comp/packages/All >>/opt/pkg/etc/pkgin/repositories.conf"

And, from now on, all it takes to upgrade your system is:

$ sudo pkgin update
$ sudo pkgin upgrade

Enjoy!


February 18, 2017

Julio Merino Keeping NetBSD up-to-date with pkg_comp 2.0

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD.

Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to maintain your NetBSD system up-to-date and secure.

This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform.

Getting started

First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are.

Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.

Adjusting the configuration

To use pkg_comp for periodic builds, you’ll need to make some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp/, which is pkg_comp-cron’s “home”:

Lastly, review root’s crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from @daily to @weekly.

Sample configuration

Here is what the configuration looks like on my NetBSD development machine as dumped by the config subcommand. Use this output to get an idea of what to expect. I’ll be using the values shown here in the rest of the tutorial:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf config
AUTO_PACKAGES = autoconf automake bash colordiff dash emacs-nox11 git-base git-docs gmake gnuls lua52 mozilla-rootcerts pdksh pkg_comp-cron pkg_developer pkgin sqlite3 sudo sysbuild sysbuild-user sysupgrade tmux vim zsh
CVS_ROOT = :ext:anoncvs@anoncvs.NetBSD.org:/cvsroot
CVS_TAG is undefined
DISTDIR = /var/pkg_comp/distfiles
EXTRA_MKCONF = /var/pkg_comp/extra.mk.conf
FETCH_VCS = cvs
GIT_BRANCH = trunk
GIT_URL = https://github.com/jsonn/pkgsrc.git
LOCALBASE = /usr/pkg
NJOBS = 2
PACKAGES = /var/pkg_comp/packages
PBULK_PACKAGES = /var/pkg_comp/pbulk-packages
PKG_DBDIR = /usr/pkg/libdata/pkgdb
PKGSRCDIR = /var/pkg_comp/pkgsrc
SANDBOX_CONFFILE = /var/pkg_comp/sandbox.conf
SYSCONFDIR = /etc
UPDATE_SOURCES = true
VARBASE = /var

NETBSD_NATIVE_RELEASEDIR = /home/sysbuild/release/amd64
NETBSD_RELEASE_RELEASEDIR = /home/sysbuild/release/amd64
NETBSD_RELEASE_SETS is undefined
SANDBOX_ROOT = /var/pkg_comp/sandbox
SANDBOX_TYPE = netbsd-release

Building your own packages by hand

Now that you are fully installed and configured, you’ll build some stuff by hand to ensure the setup works before the cron job comes in.

The simplest usage form, which involves full automation, is something like this:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf auto

This trivial-looking command will:

  1. checkout or update your copy of pkgsrc;
  2. create the sandbox;
  3. bootstrap pkgsrc and pbulk;
  4. use pbulk to build the given packages; and
  5. destroy the sandbox.

After a successful invocation, you’ll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages/.

If you’d like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file).

But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

# pkg_comp -c /var/pkg_comp/pkg_comp.conf fetch
# pkg_comp -c /var/pkg_comp/pkg_comp.conf sandbox-create
# pkg_comp -c /var/pkg_comp/pkg_comp.conf bootstrap
# pkg_comp -c /var/pkg_comp/pkg_comp.conf build <package names here>
# pkg_comp -c /var/pkg_comp/pkg_comp.conf sandbox-destroy

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details.

Lastly note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log/.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg so you have to wipe your existing packages first to avoid build mismatches.

WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won’t be compatible with the ones you already have. Avoid any trouble by starting afresh.

To clean your system, do something like this:

# ... ensure your login shell lives in /bin! ...
# pkg_delete -r -R "*"
# mv /usr/pkg/etc /root/etc.old  # Backup any modified files.
# rm -rf /usr/pkg /var/db/pkg*

Now, rebootstrap pkgsrc and reinstall any packages you previously had:

# cd /
# tar xzvpf /var/pkg_comp/packages/bootstrap.tgz
# echo "pkg_admin=/usr/pkg/sbin/pkg_admin" >>/etc/pkgpath.conf
# echo "pkg_info=/usr/pkg/sbin/pkg_info" >>/etc/pkgpath.conf
# export PATH=/usr/pkg/bin:/usr/pkg/sbin:${PATH}
# export PKG_PATH=file:///var/pkg_comp/packages/All
# pkg_add pkgin pkg_comp-cron <other package names>

Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits so this should be easy.

IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial.

Keeping your system up-to-date

If you paid attention when you installed the pkg_comp-cron package, you should have noticed that this configured a cron job to run pkg_comp daily. This means that your packages repository under /var/pkg_comp/packages/ will always be up-to-date so you can use that to quickly upgrade your system with minimal downtime.

Assuming you are going to use pkgtools/pkgin (and why not?), configure your local repository:

# echo 'file:///var/pkg_comp/packages/All' >>/etc/pkgin/repositories.conf

And, from now on, all it takes to upgrade your system is:

# pkgin update
# pkgin upgrade

Enjoy!


February 17, 2017

Julio Merino Introducing pkg_comp 2.0 (and sandboxctl 1.0)

After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here!

Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts.

What are these tools?

pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host.

The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase.

sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems. sandboxctl is the backing tool behind pkg_comp. sandboxctl hides the details of creating a functional chroot sandbox on all supported operating systems; in some cases, like building a NetBSD sandbox using release sets, things are easy; but in others, like on macOS, they are horrifyingly difficult and brittle.

Storytelling time

pkg_comp’s history is a long one. pkg_comp 1.0 first appeared on September 6th, 2002, as the pkgtools/pkg_comp package in pkgsrc. As of this writing, the 1.x series is at version 1.38 and has received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004.

This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at its 14 years of age and is possibly one of my longest-living pieces of software still in use.

Motivation for the 2.x rewrite

For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen—and still happens to this day!—is that, once in a while, I’d realize that my packages were out of date (read: insecure) so I’d wipe the whole pkgsrc installation and start from scratch. Very inconvenient; I had to automate that properly.

Thus the main motivation behind the rewrite was primarily to support macOS because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron; I wanted the exact same thing for my packages.

One, two… no, three rewrites

The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell? Just because that was the new hotness in my mind and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and that’s possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install it, and guaranteed to alienate contributors.

The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE. This was after I became quite familiar with Python at work, and I wanted to use the language to rewrite this tool. That experiment didn’t go very far though, and I can’t remember why… probably because I was busy enough at work and with creating Kyua.

The third and final rewrite attempt started in 2013 while I had a summer intern and I had a little existential crisis. The year before I had written sysbuild and shtk, so I figured recreating pkg_comp using the foundations laid out by these tools would be easy. And it was… to some extent.

Getting the barebones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks… until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7am—and, many weeks later, the code is finally here and I’m still keeping up with this schedule.

Granted: this third rewrite is not a fancy one, but it wasn’t meant to be. pkg_comp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently after playing with it last year but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time.

The launch of 2.0

On February 12th, 2017, the authoritative sources of pkg_comp 1.x were moved from pkgtools/pkg_comp to pkgtools/pkg_comp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc, and the 2.x series exists as a standalone project on GitHub.

And here we are. Today, February 17th, 2017, pkg_comp 2.0 saw the light!

Why sandboxctl as a separate tool?

sandboxctl is the supporting tool behind pkg_comp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult like macOS.

In pkg_comp 1.x, this logic used to be bundled right into the pkg_comp code, which made it pretty much impossible to generalize for portability. With pkg_comp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but allows for better testability and understandability. Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs.

I know, I know; the world has moved onto containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing… but that’s all we’ve got in NetBSD, and pkg_comp targets primarily NetBSD. Note, though, that because pkg_comp is separate from sandboxctl, there is nothing preventing adding different sandboxing backends to pkg_comp.

Installation

Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping an installer image that includes all of pkg_comp’s dependencies—but I did not want to block the first launch on this.

For now though, you need to download and install the latest source releases of shtk, sandboxctl, and pkg_comp—in this order; pass the --with-atf=no flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system.

If you are already using pkgsrc, you can install the pkgtools/pkg_comp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtools/pkg_comp-cron package to create a pre-configured environment with a daily cron job to run your builds. See the package’s MESSAGE (with pkg_info pkg_comp-cron) for more details.

Documentation

Both pkg_comp and sandboxctl are fully documented in manual pages. See pkg_comp(8), sandboxctl(8), pkg_comp.conf(5) and sandbox.conf(5) for plenty of additional details.

As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkg_comp on, at least, NetBSD and macOS. Stay tuned.

And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmv/pkg_comp and jmmv/sandboxctl.


February 09, 2017

BSD Talk bsdtalk266 - The nodes take over
We became tired of waiting. File Info: 7Min, 3MB. Ogg Link: https://archive.org/download/bsdtalk266/bsdtalk266.ogg

January 22, 2017

Emile Heitor CPU temperature collectd report on NetBSD

pkgsrc’s collectd does not support the thermal plugin, so in order to publish thermal information I had to use the exec plugin:

LoadPlugin exec
# more plugins

<Plugin exec>
Exec "nobody:nogroup" "/home/imil/bin/temp.sh"
</Plugin>

And I wrote this simple script, which reads the CPU temperatures from NetBSD’s envstat command:

$ cat bin/temp.sh
#!/bin/sh

hostname=$(hostname)
interval=10

while :
do
# one "cpuN temperature" line per CPU: keep the sensor name and its value
envstat|awk '/cpu[0-9]/ {printf "%s %s\n",$1,$3}'|while read c t
do
# emit collectd's exec-plugin plain-text protocol, truncated to whole degrees
echo "PUTVAL ${hostname}/temperature/temperature-zone${c#cpu} interval=${interval} N:${t%%.*}"
done
sleep ${interval}
done
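
Run by hand, the script emits one PUTVAL line per CPU every ten seconds, which is the plain-text protocol collectd's exec plugin consumes (host name and readings below are made up):

$ sh bin/temp.sh
PUTVAL myhost/temperature/temperature-zone0 interval=10 N:45
PUTVAL myhost/temperature/temperature-zone1 interval=10 N:46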

I then send those values to an InfluxDB server:

LoadPlugin network
# ...

<Plugin network>
Server "192.168.1.5" "25826"
</Plugin>

And display them using Grafana:

grafana setup
NetBSD temperature in grafana

HTH!