Wednesday, October 23, 2019

Emacs part 3

I have removed vim from my system.  Not because I am completely sold on emacs, but because I keep typing vim out of habit and I want to train myself to remember to type emacs.

My ~/.emacs file is growing.  So far I have this in it:
(require 'package)

(add-to-list 'package-archives '("melpa" . "http://melpa.org/packages/"))
(package-initialize)


;;; Prevent tabs and use 4 space tab stop

(setq-default indent-tabs-mode nil)
(setq-default tab-width 4)
(setq-default indent-line-function 'insert-tab)

;;; Put backup files in one place

(setq-default backup-directory-alist '(("" . "~/.emacs.d/backup")))

I have not done much to my configuration and that continues to be deliberate.  I have learned some new commands based on things I would frequently do in vim:

  • Ctrl-x i
Like ":r FILENAME" in vim, inserts the named file at the current cursor position.
  • Meta-g Meta-g NUMBER
Like ":NUM" in vim to go to a specific line number.  You type the number after the keystrokes and press Enter.

The Meta key is mapped to Alt on my keyboard despite there being a perfectly suitable Meta key.  I may remap it later.

Things I am still trying to figure out in emacs:
  • How to disable automatic formatting.  In vim I only had this enabled for certain types of source code files.  I want to turn it off by default in emacs.
  • Something like "set bg=dark" in vim.
  • How to show trailing whitespace on lines as dark grey characters.  In vim, I displayed trailing spaces as dark grey periods and trailing tabs as dark grey |_ blocks.  This has to be possible in emacs.
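A few leads I have collected for the list above, as untested sketches rather than settings I actually use yet:

```elisp
;; Untested leads, not in my config yet:

;; Rough equivalent of vim's "set bg=dark" for terminal frames:
(setq frame-background-mode 'dark)

;; Highlight trailing whitespace at the end of lines:
(setq-default show-trailing-whitespace t)

;; whitespace-mode allows finer control, e.g. drawing tabs and
;; trailing spaces with dedicated faces:
(require 'whitespace)
(setq whitespace-style '(face trailing tabs tab-mark))
(global-whitespace-mode 1)
```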
And the day continues.

Friday, October 18, 2019

Emacs part 2

Well, it's not really day 2 but close enough.  I have been using emacs mostly for some days now and feel more comfortable with it.  In fact, I am writing this in emacs now.  The day 1 log was written in vim.

I am still making minimal configuration changes to emacs and am not using the vi helper mode.  I want to learn the emacs keystrokes rather than rely on my vim muscle memory.  This has been difficult, but is getting easier.  So what have I learned?

First, I found a one page emacs cheatsheet that I printed out and put on my desk.  It is easier than searching the web each time I want to remember a keystroke.  It does use emacs jargon for some things, which may be frustrating to some, but I am also trying to get used to that.  For example, emacs' use of words like buffer and kill.

I have successfully gotten used to opening files, saving files, and quitting the program.  Three things I consider essential to using an editor.  I have also been using the forward and backward search capabilities.  One thing I have not quite figured out how to do is delete an entire line similar to how vim does it, but there may not be a direct way.

In vim, I can type dd in command mode and it will delete the current line regardless of where the cursor is on that line.  The line is removed and the entire file shifts up.  I have found Ctrl+k in emacs, which has similar but slightly different behavior.  It kills from the cursor to the end of the line: from column 0 it removes the line's content but leaves the newline in place, and when nothing remains after the cursor it deletes the newline and brings the next line up.  I may not be completely understanding what's happening, but it's not a deal breaker.
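One possibility I have seen mentioned but have not tried yet is the kill-whole-line variable:

```elisp
;; Untested: with this set, Ctrl+k at the very start of a line kills the
;; whole line including the newline, much like vim's dd.
(setq kill-whole-line t)

;; There is also a kill-whole-line command bound to C-S-<backspace>,
;; which kills the current line regardless of cursor position.
```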

I would like to make better use of emacs buffers, but that will come as I continue to use it day to day.  Until the next log entry... Ctrl+x Ctrl+c

rpminspect-0.8 released (and a new rpminspect-data-fedora)

Work on the test suite continues with rpminspect and it is finding a lot of corner-case type runtime scenarios.  Fixing those up in the code is nice.  I welcome contributions to the test suite.  You can look at the tests/test_*.py files to see what I'm doing and then work through one inspection and do the different types of checks.  Look in the lib/inspect_NAME.c file for all of the add_result() calls to figure out what tests should exist in the test suite.  If this is confusing, feel free to reach out via email or another means and I can provide you with a list for an inspection.

Changes in rpminspect-0.8:

  • Integration test suite continues to grow and fix problems.

  • The javabytecode inspection will report the JAR relative path as well as the path to the embedded class file when a problem is found. (#56)

  • libmandoc 1.14.5 API support. rpminspect will continue to work with 1.14.4 and previous releases and will detect which one to use at build time. The mandoc API changed completely between the 1.14.4 and 1.14.5 release. This is not entirely their fault as we are using it built as a shared library and the upstream project does not officially do that.

  • rpminspect now exits with code 2 when there is a program error. Exit code 0 means inspections passed and exit code 1 means there was at least one inspection failure. (#57)

  • If there is a Python json module exception raised in the test suite, print the inspection name, captured stdout, and captured stderr. This is meant to help debug the integration test suite.

  • Fix the Icon file check in the desktop inspection. Look at all possible icon path trees (set in rpminspect.conf). Also honor the extensionless syntax in the desktop file.

  • Fix the Exec file check in the desktop inspection so it honors arguments specified after the program name.

  • Fix a SIGSEGV when the before and/or after arguments on the command line contain ".." in the pathspec.

  • [MAJOR] Fix fundamental problems with the peer detection code. The integration test suite caught this and was leading to false results.

  • Add the IPv6 function blacklist check. The configuration file can carry a list of forbidden IPv6 functions and raise a failure if it finds any of those used.

Changes in rpminspect-data-fedora-0.6:
  • Change bytecode version to be JDK 8
  • Add desktop_icon_paths to rpminspect.conf

Many thanks to the contributors, reporters, and testers.  I am continuing on with the test suite work and new inspections.  Keep the reports coming in.

Monday, October 14, 2019

Emacs - Day 1

Day 1 of giving Emacs a try.  After 20+ years of using vim, I wanted to improve my development environment and general workflow at the office.  I found Emacs OrgMode and thought it sounded interesting. The last time I used Emacs was in the mid 90s or so and I couldn't figure out how to exit and got frustrated.  Everyone I knew used a form of vi, so those were the questions they could answer.  And here I am all these years later in vim.  Would I actually like Emacs if I gave it a fair try?  Time to try it out.

DAY ONE

I started by installing the emacs package on my workstation:
dnf install -y emacs
OK, that was easy.  Now run 'emacs' from the terminal and...it launches a graphical program.  I wanted it to run in the console. Searching around, I find the way to do this is to run 'emacs -nw'. You can also install the emacs-nox package, but I might want the graphical one so I will just use the command line option for now.  I added an alias to ~/.zshrc:
alias emacs='emacs -nw'
Easy.  Now to get down to business.  My .vimrc file is not particularly large and I have never really been comfortable with VimScript.  Without getting in to that now, I decided to just use the default Emacs configuration to edit and save some source files.  This also has the advantage of making me learn the Emacs defaults.

I opened a C source file and made changes.  I did have to go and look up how to exit.  I used Ctrl+x Ctrl+c and it asked me to save, so I said yes.  I hope I get better at this.

So, some success.  I will admit I had a lot of stray "i" characters appearing throughout the files I edited, but that is to be expected right now.  I want to go through my .vimrc and figure out what I want ported over to the Emacs configuration.

Wednesday, October 9, 2019

What? You're using Emacs?

I have been a vim user for a long time, but I'm also lumping in various varieties of 'vi' with that.  For most of that time I have used vim, but there were times where I used nvi or elvis or the vi that came with some commercial Unix operating system.  But why?

Mostly because it was the first editor someone showed me how to use and then it just sort of went from there.  I maintain that the vi vs. emacs argument is flawed because both programs require training yourself over a long period of time.  There is nothing inherently intuitive about either of them.

I'm going to say that I've been a vim user for 20 years.  I started using it before that, but those years were a mix of nvi, elvis, and vim.  And I'm going to subtract some years from that because I didn't make effective use of my .vimrc until later.

I have never given Emacs a real shot.  Mostly because I didn't want to invest all the time in to learning another program that was still going to leave me with just another editor.  And for a long time I didn't care.  But recently I have found myself wanting more out of my development environment.  I have tried many vimscript plugins and external tools and they are nice, but I still have about a zillion vim sessions open and it never quite feels well integrated.  Internet tells me that people have nice Emacs setups for development, so maybe I should try that.

OK, I'll do it.

About a month ago I looked in to doing this and then about 3 weeks ago I decided to make it happen for real.  On my workstation at the office.  I installed emacs and removed vim.  When I type 'vim' now, I get command not found.  I forced myself to use emacs as-is without modifying the configuration file for the first week.  I made notes on paper and focused on the basic commands and training myself out of vim muscle memory.

I have also been writing up blog posts to post later as I have moved over to Emacs.  I am still using it and I like it.  I will post my Emacs entries later, but for now I will post a link to my now growing .emacs file.

Tuesday, October 8, 2019

rpminspect-0.7 released, bug fixes and a new integration test suite

rpminspect-0.7 has been released.  The main things in this release are a new integration test suite and many bug fixes.  There is one new user feature and that's the -t or --threshold option.

The -t option lets you control the result code that triggers a non-zero exit code from rpminspect.  By default, this is set to VERIFY.  But you could set it to BAD or INFO or any other valid result code in the program.  The result code specified by this option means that any result in rpminspect at that code or higher will trigger a non-zero return code.  Combined with the -T option, this can be a useful tool for some types of CI system integration.

As always, thank you for the bug reports and pull requests!  The next release will likely take the same amount of time and will continue to focus on stabilization with the test suite.

This release is available in my Copr rpminspect repo as well as for rawhide and F31.  Please let me know if you have any questions.

Thursday, September 19, 2019

rpminspect-0.6 released with new inspections and bug fixes

There are three new inspections implemented in rpminspect-0.6:
  • The upstream inspection compares SRPMs between before and after builds to determine if the Source archives changed, were removed, or new ones added.  Anything listed as a Source file in the spec file is examined and not just tarballs.  Source file changes when the package Epoch and Version do not change are considered suspect and need review.
  • The shellsyntax inspection looks at shell scripts in source and binary packages and runs them through the syntax validator for the indicated shell (the -n option on the shell command).  The shells that rpminspect cares about are in the shells list in the rpminspect.conf file.  This inspection reports scripts that fail the syntax validator or scripts that were good but are now bad.  If you had a bad one and it's now good, you are simply notified.
  • The ownership inspection enforces some file owner and group policies across builds.  The rpminspect.conf settings of bin_owner, bin_group, forbidden_owners, and forbidden_groups are all used by this inspection.  A typical use of this inspection is to ensure executables are correctly owned and that nothing is owned by mockbuild.
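The syntax-only check that shellsyntax relies on is easy to try by hand; the shell's -n option parses a script without executing it.  A quick illustration (file names made up):

```shell
# A script with valid syntax and one with a dangling "if":
printf 'echo "hello"\n' > /tmp/good.sh
printf 'if true; then echo broken\n' > /tmp/bad.sh

# -n parses without executing; the exit status reflects the result:
sh -n /tmp/good.sh && echo "good.sh: syntax OK"
sh -n /tmp/bad.sh 2>/dev/null || echo "bad.sh: syntax error"

rm -f /tmp/good.sh /tmp/bad.sh
```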
This release also includes a lot of bug fixes.  I really appreciate all of the feedback users have been providing.  It is really helping round out the different inspections and ensure it works across all types of builds.

For details on what is new in rpminspect-0.6, see the release page.

There is also a new release of rpminspect-data-fedora which includes changes necessary for the new inspections.  See its release page for more information.

Both packages are available in my Copr repo.  I am doing Fedora builds now, which includes Fedora 31.  If you want another release of Fedora to have builds, let me know.

Sunday, September 8, 2019

github notifications

I have noticed that a number of projects I have on github have stopped notifying me.  I have not yet found a pattern or missed setting, but it is possible I am looking in the wrong place.  I do get notifications for some projects, just not all.  And this goes across projects that are under my personal account as well as projects that I am a member of but exist elsewhere.

Has anyone seen this with github?  Any tips on receiving consistent notifications?  I am wanting to receive notifications of Issues (both new issues and comments posted), Pull Requests, commits, tags, and releases.

Friday, September 6, 2019

rpminspect-0.5 released, two new inspections and some bug fixes

[I would have posted this roughly 18 hours ago when I made the release, but the fire alarm went off in the office and, well, that sort of had this wait until today.]

rpminspect-0.5 is now available.  The releases are noted on the github project page.  I uploaded the tarball there.  I have done builds in Fedora rawhide and the f31 branch.  For other releases, you will need to use the Copr repos.

Bug fixes and general improvements:
  • Support running rpminspect on local RPM packages (#23). You may now specify a local RPM or SRPM as the input for rpminspect. If you specify a before and after file, rpminspect will assume they are peers and will perform applicable inspections.
  • Adjust the 'text' output mode by adding some extra blank lines for readability.
  • For the 'changedfiles' inspection, get the list of possible C and C++ header file endings from the header_file_extensions setting in rpminspect.conf.
  • Add dnf instructions to the README file to help get the development packages installed on Fedora or RHEL.
  • Prevent a crash in get_product_release() when the build specification lacks enough information to infer a product release (e.g., a Koji build ID).
  • Start an integration test suite in the tests/ subdirectory.
  • Adopt a Code of Conduct for the project, see CODE_OF_CONDUCT.md
  • Move the data/setuid subdirectory to data/stat-whitelist. The files will be installed to /usr/share/rpminspect and the stat-whitelist subdirectory provides information on file modes, owners, and groups for known setuid/setgid files.
  • Process a [vendor-data] section in the configuration file which contains paths to locations provided by the rpminspect data package.
  • Fix configuration file detection in rpminspect.
New inspections:
  • removedfiles

    Only runs when you are comparing a before and after build.  The general idea is to report when files were removed from a package.  That is, a file was present in the before build, but gone in the after build.  There are some additional checks and reporting:

    • If the removed file was an ELF shared object, rpminspect reports it as a RESULT_BAD noting it may be a potential ABI break.
    • Files removed from a security path prefix are also marked as RESULT_BAD and as WAIVABLE_BY_SECURITY. All other removals are reported as RESULT_VERIFY.

    The security path prefixes are set in the rpminspect.conf file in the security_path_prefix setting.  This is the sort of thing that changes over time as well as varying across similar products.
  • addedfiles

    Kind of like the opposite of removedfiles, but does a little more. The main thing is to catch any accidental additions to packages as well as additions to security path prefixes:

    • Ignore the debuginfo and debugsource paths.
    • Ignore anything with .build-id/ in the path.
    • Ignore Python .egg-info files since these come and go and sort of always change.
    • Report files added to /tmp or /var/tmp path prefixes as RESULT_BAD.  The forbidden_path_prefixes list can be set in the rpminspect.conf file.
    • Report files added that end with ~ or .orig as RESULT_BAD.  The forbidden_path_suffixes list can be set in the rpminspect.conf file.
    • Report any __MACOSX, CVS, .svn, .hg, or .git directory added as RESULT_BAD.  The forbidden_directories list can be set in the rpminspect.conf file.
    • Any files added to a security path prefix are reported as RESULT_VERIFY and WAIVABLE_BY_SECURITY.
    • Any setuid and/or setgid file added is reported as RESULT_INFO if the file is on the stat-whitelist and the expected permission mode matches.  If they do not match or the file is not on the stat-whitelist, it is reported as RESULT_VERIFY and WAIVABLE_BY_SECURITY.

    Most of the settings for this inspection are in rpminspect.conf, with the exception of the stat-whitelist which is per product release and comes from the data package.
There is also a new rpminspect-data-fedora release which contains an updated rpminspect.conf file with the new sections and settings.  It also contains the stat-whitelist subdirectory with files for recent Fedora releases.

In the addedfiles inspection, the check for __MACOSX subdirectories is deliberate and comes from the ancestor of rpminspect.  In theory, you could build noarch RPMs on MacOS X and then install those on Fedora.  In those cases, we want to make sure you do not accidentally package up a __MACOSX subdirectory.

Builds are available in Copr as well as the f31 and rawhide branches.

Monday, August 26, 2019

rpminspect-0.4 released, now with changed file reporting

rpminspect-0.4 is now out.  I have built it on rawhide and for F-31.  My Copr repo has builds for previous releases.  There are a number of fixes and improvements in this release.

Issues fixed:
  • Support multiple buildhost subdomains in rpminspect.conf (#25).
In Fedora, the s390x packages are built on hosts provided by Red Hat's internal mainframe. These have a buildhost subdomain of .bos.redhat.com while the other architectures carry .fedoraproject.org. The buildhost_subdomain parameter in rpminspect.conf now supports multiple subdomains separated by spaces.
  • Add more usage information to the README (#24)
Give more examples on how to use rpminspect at the command line.
  • Add support for specifying a list of architectures on the command line (#27)
This is similar to the koji command line option to restrict builds to a subset of architectures. List architectures as a string separated by commas. "noarch" is valid since RPM recognizes that. To note the SRPM, use "src" as the architecture. An example: -a x86_64,ppc64le,src
  • Split the -T option out in to -T and -E options (#28)
The biggest issue here was my use of '!' to specify excluded tests. I have now split the option out in to -T to specify tests to run -or- the -E option to specify tests to not run. The options are mutually exclusive and the default mode for rpminspect is to run all applicable tests. If you specify -T, rpminspect disables all tests except the ones you specify with this option. You can use 'ALL' with the -T option if you want to, but that is the default behavior. If you specify -E, rpminspect enables all tests except the ones you specify with this option. If you use 'ALL' with the -E option, all tests are disabled and rpminspect becomes a no-op.
New functionality:
  • The 'changedfiles' inspection is new and does quite a bit. More on that below.
  • The rpminspect.conf file now carries the security_path_prefix setting to list path prefixes where security related files reside.
  • The fetch only mode writes the downloaded Koji build to an NVR subdirectory rather than the temporary directory structure rpminspect would use internally.
The changedfiles inspection first has some restrictions:
  • It skips non-regular files.
  • It skips any files lacking a peer.  There are separate inspections that will handle new, removed, and moved files.
  • It skips Java class files and jar files (these will be handled by other inspections).
With those conditions met, it does this:
  • If a file is gzip'ed, it runs zcmp(1) on the peers and reports if there are changes.
  • If a file is bzip2'ed, it runs bzcmp(1) on the peers and reports if there are changes.
  • If a file is xz'ed, it runs xzcmp(1) on the peers and reports if there are changes.
  • If the file is an ELF object, it runs eu-elfcmp(1) on the peers and reports changes.
  • If the file is a gettext message catalog, it unpacks the catalogs to temporary files and runs 'diff -u' on the output and reports changes.
  • If the file is a C or C++ header file, it preprocesses the peers to remove comments and runs 'diff -uw' on the output and reports changes.
  • Not meeting any of the above special cases, rpminspect compares the SHA-256 digests of the peers and reports if they are different.
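That fallback digest comparison can be approximated from the shell (paths made up for the example):

```shell
# Two files with identical content, created separately:
printf 'payload\n' > /tmp/before.bin
printf 'payload\n' > /tmp/after.bin

# Matching digests mean the fallback check would report no change:
a=$(sha256sum /tmp/before.bin | cut -d' ' -f1)
b=$(sha256sum /tmp/after.bin | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "digests match"

rm -f /tmp/before.bin /tmp/after.bin
```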
There are some additional checks in place, such as reporting changed files that are prefixed by a security path prefix as defined in the configuration file.  This is meant to catch files that change between packages and need approval from a security team or group.  rpminspect reports these changes with WAIVABLE_BY_SECURITY_TEAM and RESULT_BAD.  Other changes are reported as RESULT_VERIFY.

This inspection does more than its ancestor did.  For one, the compressed file check is interesting because it tolerates changing compression levels and just looks at the uncompressed content.  However, only gzip was handled, so I added the others.  Also, the digest check occurred before any of the special case handling, which resulted in no detailed feedback for those changes.

I look forward to hearing feedback as well as bug reports.  I am continuing on the file comparison inspections to get that rounded out.

Builds are available in Copr, rawhide, and F-31.

Monday, August 19, 2019

rpminspect-0.3 released

Released rpminspect-0.3 today with bugs reported and fixed during Flock Budapest 2019.  Builds are in my Copr repository:

https://copr.fedorainfracloud.org/coprs/dcantrel/rpminspect/

I am building it in rawhide now.  The biggest fix in this release is the correct handling of hard links when extracting RPM payloads.

I have also moved all of the upstream rpminspect projects over to the rpminspect organization in github:

https://github.com/rpminspect/


rpminspect Presentation at Flock 2019

Flock in Budapest was a great event.  There were a lot of talks I wanted to attend, but could not make it to all of them.  I did give one talk on my project called rpminspect.

rpminspect is a project I started as a replacement for an internal Red Hat tool.  I am working on integrating it in to the build workflow for Fedora while also allowing package maintainers to use it locally as a build linter of sorts.  Here is a link to the presentation I gave.  I think there is video, but I am not sure where it is posted.

You can find rpminspect on my github page, but I am moving it and related repos over to the rpminspect organization in github.

Feedback was great for rpminspect and while there I fixed a number of bugs.  I have automatic builds in Copr right now and I am also building released versions in rawhide.

I will do a number of other posts on rpminspect after this one, but I wanted to get this posted for two reasons.  First, to link to the rpminspect organization and presentation.  And second, to test my Fedora Planet connectivity.

Tuesday, April 30, 2019

USB Flash Media on Linux

USB flash media is one of the most useful devices to come along in recent computer history.  Gone are the days of floppy diskettes, CD-R media, and DVD-R media.  While those were fine, USB flash media has just made things much easier.  Our best interim solution in the 90s was the Iomega Zip drive (full disclosure: one still sits on my desk and I use it for sneaker-netting to vintage systems that have SCSI and a Zip drive).

OK, so there are some things to know when using USB flash media on Linux.  For me, the two common things I do are boot ISO images and copy files between systems.  Usually old Windows systems that I use for programming old two-way radio systems.

In the case of booting ISO images, you just need to dd the file to the block device.  Let's say I insert flash media and see in dmesg that it's /dev/sdc.  I would write out the ISO image using this command:

dd if=FUN_OPERATING_SYSTEM.iso of=/dev/sdc status=progress oflag=sync bs=4096

You may need to run the isohybrid command on some Linux ISO images to make sure they boot from USB media.  Do that on the ISO file before running dd.  So what do the options do?
  • if is the "input file".  This is what I want to write to USB.
  • of is the "output file".  Where is it going?  In this case, a block device called /dev/sdc.
  • status=progress is a GNU feature that shows some status information during the transfer.
  • oflag=sync means "really write it to the block device".
  • bs=4096 means write 4k at a time.  By default, dd uses a 512-byte block size.  For an 8GB ISO, that can take a while.  I have successfully set a 32k block size and dd performs just fine, so I use bs=32768.
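A quick way to convince yourself what bs and count do is to write to a plain file instead of a block device (the path is just an example):

```shell
# 256 blocks of 4096 bytes = 1 MiB
dd if=/dev/zero of=/tmp/dd-demo.img bs=4096 count=256 status=none

# Verify the size: 4096 * 256 = 1048576 bytes
stat -c %s /tmp/dd-demo.img

rm -f /tmp/dd-demo.img
```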
Great, now you can write out an ISO.  How about file transfer?  In my case, I stick with the FAT32 filesystem.  Even non-DOS operating systems can read and write it.  First, you need to partition the USB device and create a single partition as partition 1.  Alternatively you can format the device on Windows and it will do all of that for you.  Create a FAT32 filesystem on the first partition of the device with a command like:

mkfs.vfat -F 32 /dev/sde1

Now mount it:

mount -o flush /dev/sde1 /mnt

What is "-o flush"?  This tells the kernel to write cached data out to the device whenever it goes idle.  It is similar to the "-o sync" option on other filesystems, but the distinction for FAT32 is that flush simply writes data out earlier, while sync forces every write straight to the device, which can make writes take a very long time.

The "-o flush" option also means when you unmount the device it will most likely be ready to remove.  This is also way faster than running sync before unmounting.  I've also heard urban legends of the sync command destroying the life of USB flash media, so I guess there's that.

If you have ever waited a long time to dd something to a USB flash device, try some of the above options for dd and mounting.  It makes the devices much nicer to work with on Linux.

Oh Geez Windows - Part 2

I never could get the ThinkPad X1 Carbon 6th generation laptop to install and run Windows 7.  USB 3.0 and NVMe devices were just too new for it.  What I ended up doing was buying a ThinkPad X230 off eBay and setting it up with Windows 7.  I then proceeded to try to get both versions of Motorola CPS working on that system.

First, MOTOTRBO CPS.  This was simple and worked very easily.  I did have to reinstall and add the wideband entitlement key after installing, but otherwise I am now able to program the XPR7550 radios just fine.

Second, Waris CPS.  This installed and ran fine, surprisingly.  However the X230 lacks a real serial port.  What I found that works is the StarTech ICUSB2321F.  This device uses the FTDI chipset for the serial port, which is reliable enough to get CPS to work.  So no more T23 for Waris programming.

I disabled all of the automatic updating in Windows since I don't care and I don't want to be surprised by this thing suddenly upgrading itself to Windows 10.

What I've learned through all this is that it's just easiest to use the platform Motorola recommends for CPS and not fight it.  A used laptop that meets the requirements costs far less than any Motorola product you're trying to program.

Sunday, April 7, 2019

Oh Geez Windows

I don't use Microsoft Windows.  The last time I had a serious installation of Windows that I actually used was Windows NT 4.0.  I have no relevant Windows knowledge.

But I need Windows to run the Motorola CPS software to program two-way radios.  There's no alternative and Motorola Solutions is not a great software house.  Their software is generally quite terrible and extremely picky about its runtime environment.  I have two main versions of CPS that I need to run.  Because Motorola doesn't just make one version of CPS for all radios, the version you need is tied to your radio series.  Great.

The first version I need is for the Waris series radios, which are now discontinued.  This is the HT750 and HT1250 (among others).  This is a Windows program and uses the RIB device to speak to the radio through an actual no-shit serial port.  The versions of Windows it can run on are limited, but I am successfully using it on Windows XP.  Now, since it needs a real actual no-shit serial port (no USB adapters!), I need a system that (a) can run this version of Windows and (b) has a real serial port.  For this version of CPS, I settled on a ThinkPad T23 off eBay.  It runs CPS effectively enough to program these radios.

The second version I need is for the MOTOTRBO series radios.  I have the XPR7550, for instance.  Fortunately this requires an entirely different programming cable (this time it's USB!) and different software.  The recommended version of Windows is 7.

The T23 is out of the question for Windows 7, so I have drafted a spare work laptop in to temporary duty just for programming the radios.  After I program them, I plan to wipe the laptop and find a more permanent solution.  Unfortunately this has become much more difficult than I wanted.

The laptop I'm trying to install Windows 7 on is a 6th generation ThinkPad X1 Carbon.  It has USB 3.0 and NVMe storage.  These are things Windows 7 knows nothing about natively, which means that even though I have been able to figure out a way to create USB boot media for Windows 7 and boot it, it can't see what it booted from nor can it load any drivers from anywhere because the only way to sneakernet drivers in this early is through USB 3.0.

Ugh.

Googling around reveals some methods to update your Windows 7 install media and add drivers.  I tried this from the T23 on Windows XP, but the updating steps have to run from Windows 7.  Come on.

I have wasted a lot of time on this, but the plan at the moment is:
  • Install a virtual guest on my Linux workstation and install Windows 7.
  • Install OpenSSH and rsync in that virtual guest.
  • rsync over the Windows 7 install ISO to the Windows 7 guest.
  • Follow the steps to use the DISM program to modify the boot media to add in the USB 3.0 drivers and the NVMe drivers.  I have no idea if this will work.
  • Save the resulting changes back to the ISO.
  • rsync the ISO back to the Linux host.
  • dd out the modified ISO to flash media.
  • Boot the new ISO on the ThinkPad X1 Carbon and proceed with installation.
  • Install the MOTOTRBO software and program the radios.
At this point I am on the 3rd item.

Saturday, March 2, 2019

Memory Allocation in C - Followup

My previous post talked about memory management in C and I described how I like to use the assert() function.  What I did not note is that assert() functionality can be disabled by defining NDEBUG in the program or passing -DNDEBUG to the compiler.  That's a problem if you embed a function call in the expression that you wrap in assert.  Let's look at an example.

Take this silly example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>

int main(int argc, char **argv) {
    char *s = NULL;
    char *data = "this is a string";

    assert((s = calloc(1, BUFSIZ)) != NULL);
    assert((s = strncpy(s, data, strlen(data))) != NULL);
    printf("s: |%s|\n", s);
    free(s);

    return EXIT_SUCCESS;
}


When compiled, it should probably just print "s: |this is a string|", right?  Obviously in a completely unnecessary and complicated way.  But that's still what it will do.  Let's try (I have saved this code to a file called foop.c):

$ gcc -O0 -g -Wall foop.c
$ ./a.out
s: |this is a string|

It's a miracle!  It worked.  But what happens if we compile it with -DNDEBUG?

$ gcc -O0 -g -Wall -DNDEBUG foop.c
foop.c: In function 'main':
foop.c:9:11: warning: unused variable 'data' [-Wunused-variable]
     char *data = "this is a string";
           ^~~~


Uh oh.  What happens when we run it?

$ ./a.out
s: |(null)|

Yeah, that's wrong.  We need the same thing to happen whether or not we compile with -DNDEBUG, so let's rewrite it a bit:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>

int main(int argc, char **argv) {
    char *s = NULL;
    char *data = "this is a string";

    s = calloc(1, BUFSIZ);
    assert(s != NULL);

    s = strncpy(s, data, strlen(data));
    assert(s != NULL);

    printf("s: |%s|\n", s);
    free(s);

    return EXIT_SUCCESS;
}


Now let's try to compile it:

$ gcc -O0 -g -Wall foop.c        
$ ./a.out
s: |this is a string|


That checks out.  Now with -DNDEBUG:

$ gcc -O0 -g -Wall -DNDEBUG foop.c
$ ./a.out                        
s: |this is a string|


That's more like it.  assert() is useful for developers and I try to make use of it a lot during development, but remember to avoid embedding expressions in what you wrap in assert().  Relying on side effects leads to problems and assert() can't help with that.