This article discusses a way to create and use a NuGet package for libraries which are not yet stable and which are still under active development.

Specifically, the core idea is to develop such a library and a project consuming this library in an efficient way.

Concept

As we develop on Windows with Visual Studio, NuGet is a powerful tool to manage library dependencies in projects, for external 3rd-party libraries as well as for our own libraries. Using Git, our alternatives would be:

  • Git Submodules – with their known problems (detached head, multiple commits, etc.)
  • cmake – vs. Visual Studio solutions
  • manual paths / zip packages – manual work

Using NuGet seems the way to go for Windows-only projects, even for native C++ components. (Side note: CoApp is dead and is not missed.)

The Problem

However: if we have a library component and a consuming project, both of which we have to actively develop at the same time, the NuGet process gets prohibitively tiring. For each change in the library, we need to build the lib, repack the NuGet package, increase its (build) version number, update the consuming project’s NuGet reference, and rebuild the project… for every tiny typo.

The Idea

The core idea is to prepare the NuGet package in such a way that a local override of the package’s content is possible. Think of an environment variable holding the file system path to the library’s development directory. If this environment variable is not set, the content of the NuGet package is used. If the environment variable is set, then include paths and library paths are set to directly use the files found in that development directory (obviously resulting in an unclean, in-development version of the project).

Since native NuGet packages simply use an MSBuild script to integrate themselves into a project, we have (almost) all the possibilities MSBuild has to offer.

Set up the NuGet Package

Creating a native C++ library NuGet package without CoApp is not very difficult, but out of scope for this article. For the sake of simplicity I only discuss the (second) easiest task: a static library. We need some header file and some static library files. All in all, the nuspec file looks like this:

NugetDevPackageTest.testLib.nuspec

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <!--
    Create package:
    cd solution directory
    nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build001
    -->

  <!-- package metadata -->
  <metadata>
    <id>NugetDevPackageTest.testLib</id>
    <version>$version$</version>
    <title>NugetDevPackageTest testLib</title>
    <authors>Sebastian Grottel</authors>
    <projectUrl>https://bitbucket.org/sgr_faro/nuget-dev-package-test/overview</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Nuget package of `someLib` to test dev nuget package concept</description>
    <tags>nugetdevpackagetest testLib native</tags>
  </metadata>

  <!-- content -->
  <files>
    <!-- include header files -->
    <file src="DemoClass.h" target="build\native\include" />

    <!-- static libraries -->
    <file src="lib\Win32\Debug\some.lib" target="build\native\lib\Win32\Debug" />
    <file src="lib\Win32\Release\some.lib" target="build\native\lib\Win32\Release" />
    <file src="lib\x64\Debug\some.lib" target="build\native\lib\x64\Debug" />
    <file src="lib\x64\Release\some.lib" target="build\native\lib\x64\Release" />

    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />

  </files>
</package>

There are two interesting aspects to highlight here:

  1. The static libraries can be collected from any directories. What is important is that you specify the correct target directories; follow the semantic rules NuGet specifies.
  2. The build rule file, NugetDevPackageTest.testLib.targets, controls the integration of the NuGet package’s content into the Visual Studio project. This file must have the same name as the package itself.
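For orientation, here is roughly how the files end up inside the package, following the targets given in the nuspec above:

NugetDevPackageTest.testLib.nupkg
  build\native\NugetDevPackageTest.testLib.targets
  build\native\include\DemoClass.h
  build\native\lib\Win32\Debug\some.lib
  build\native\lib\Win32\Release\some.lib
  build\native\lib\x64\Debug\some.lib
  build\native\lib\x64\Release\some.lib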

We set up the targets file as follows:

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- If development directory is empty, use content from the package -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(MSBuildThisFileDirectory)include\</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibraryPath>$(MSBuildThisFileDirectory)lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibraryPath>
  </PropertyGroup>

  <!-- If development directory is set, use the content found there -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(NugetDevPackageTest_testLib_DevDir)\</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibraryPath>$(NugetDevPackageTest_testLib_DevDir)\lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibraryPath>
  </PropertyGroup>

  <!-- If development directory is set, define an additional macro -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- Compiler options: package macro and include paths -->
  <ItemDefinitionGroup>
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
    <ResourceCompile>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
  </ItemDefinitionGroup>

  <!-- Linker options: library (full) path -->
  <ItemDefinitionGroup>
    <Link>
      <AdditionalDependencies>$(__NugetDevPackageTest_testLib_LibraryPath);%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>

</Project>

Create the NuGet package:

nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build002

(Obviously, adjust the version number as you like.) And be happy: it already works as intended. If the environment variable NugetDevPackageTest_testLib_DevDir is set, the content of that folder is used instead of the NuGet package’s content.
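For a quick test without any GUI, you can set the variable in a console and start Visual Studio from there (a sketch; the path to your library clone is hypothetical and will differ on your machine):

rem hypothetical development directory; adjust to where you cloned the lib
set NugetDevPackageTest_testLib_DevDir=C:\dev\testLib
rem start Visual Studio from this console so it inherits the variable
rem (devenv must be on the PATH, e.g. in a developer command prompt)
devenv NugetDevPackageTest.sln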

Having to set an environment variable, and having to restart Visual Studio any time you change it, is obviously not great. We can improve on that by adding a GUI.

Visual Studio Property Page

We add a Visual Studio property page for the project to the NuGet package, allowing us to change the setting of the variable from within Visual Studio. For this we add an XML description of the values we want to change:

overrideSettings.xml

<?xml version="1.0" encoding="utf-8"?>
<ProjectSchemaDefinitions xmlns="clr-namespace:Microsoft.Build.Framework.XamlTypes;assembly=Microsoft.Build.Framework">

  <!-- Project property settings to override content of nuget development packages -->
  <Rule Name="ProjectSettings_NugetDevOverride" PageTemplate="tool" DisplayName="Nuget Development Override" SwitchPrefix="/" Order="1">

    <!-- Category to identify this package -->
    <Rule.Categories>
      <Category Name="NugetDevPackageTest_testLib" DisplayName="NugetDevPackageTest.testLib" />
    </Rule.Categories>

    <!-- Store override settings only in the user file -->
    <Rule.DataSource>
      <DataSource Persistence="UserFile" ItemType="" />
    </Rule.DataSource>

    <!-- Override development path -->
    <StringProperty Name="NugetDevPackageTest_testLib_DevDir"
      DisplayName="Development Directory"
      Subtype="folder"
      Description="Override content of Nuget package with content of this directory. If this field is empty (default) the content of the nuget package is used. This value is persistent as user setting only."
      Category="NugetDevPackageTest_testLib" />

  </Rule>
</ProjectSchemaDefinitions>

Note that the StringProperty directly stores its value in the variable we are already using. Make sure not to use this variable as an environment variable anymore!
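For illustration, with Persistence="UserFile" the value then ends up in the project’s .user file, roughly like this (a sketch; the exact content is written by Visual Studio, and the path shown is hypothetical):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- hypothetical path; whatever you entered in the property page -->
    <NugetDevPackageTest_testLib_DevDir>C:\dev\testLib</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>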

Now, we reference this property page from the other files of our NuGet package:

NugetDevPackageTest.testLib.nuspec

...
    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />
    <file src="overrideSettings.xml" target="build\native" />

  </files>
</package>

 

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- include the gui property page to override the development directory -->
  <ItemGroup>
    <PropertyPageSchema Include="$(MSBuildThisFileDirectory)overrideSettings.xml" />
  </ItemGroup>

  <!-- If development directory is empty, use content from the package -->
...

Now create the new version of the package and update the package reference in your consuming project. You might need to restart Visual Studio once. You should then find the new property page in the settings of your project.

Use the NuGet Package or the Local Version

Be aware of all the pitfalls

  • Never commit the .user file of your projects to the repository!
  • If your project code requires local changes within the code usually packed into the NuGet package, your project will not compile correctly on any other machine!
    • Before you push, or at least before you create a pull request, you need to push and publish the changed NuGet project, and you need to reference the NuGet package cleanly, without override.
  • You need two Visual Studio instances running, one for your project and one for the NuGet-ted project.
    • If you run your project in the debugger and step into functions from the NuGet project, the corresponding source file is opened in the Visual Studio of your project. You can edit the files there, but of course you cannot compile the NuGet library there. Remember to switch to the right Visual Studio for that.

The Proposed Process

Actions happen either in Visual Studio #1 (holding testApp) or in Visual Studio #2 (holding testLib); remarks follow each step after a dash.

  1. Visual Studio #1: Clone testApp.
  2. Visual Studio #1: Open the testApp solution.
  3. Visual Studio #1: Build testApp. – This fetches the NuGet package of testLib and builds the app with the content of that package, e.g. version 1.0.0.
  4. Visual Studio #2: Clone testLib.
  5. Visual Studio #2: Open the testLib solution.
  6. Visual Studio #2: Build testLib.
  7. Visual Studio #1: (Build testApp.) – Nothing changes. The NuGet package is still used. The build should not do anything (up to date).
  8. Visual Studio #1: Open the project settings and override the NuGet settings. Reference the directory where you cloned and built testLib.
  9. Visual Studio #1: (Build testApp.) – Known issue: Visual Studio still believes the project is up to date.
  10. Visual Studio #1: Rebuild testApp. – The app is now built with the testLib built in your local directory!
  11. Visual Studio #1: (Build testApp.) – The project is up to date (which is correct).
  12. Visual Studio #2: Change some code and build testLib.
  13. Visual Studio #1: Build testApp. – The project correctly detects that it needs to build (at least to link) again. The changes of the local testLib can be used directly.
  14. Visual Studio #1: Start testApp in the debugger. – You can even step into code of testLib, look at local / private variables, etc. The corresponding source files of testLib are opened in Visual Studio #1!
  15. Visual Studio #1: You edit source of testLib. – WRONG Visual Studio!
  16. Visual Studio #1: You hit Build → testApp is up to date. – It is the wrong Visual Studio. You cannot build testLib there. You need to switch to the other Visual Studio.
  17. Visual Studio #2: Might detect the changes to the source files, if they are opened. If the files are not opened, everything is ok. If Visual Studio does ask you, select to keep the changes made outside Visual Studio #2.
  18. Visual Studio #1: Hit F5 (start debugging). – Binary and source code do not match! Visual Studio #1 should warn you about it if you try, e.g., to set a breakpoint in source code of testLib.
  19. Visual Studio #2: Build testLib. – testLib is updated with the changes to the source code you made in Visual Studio #1.
  20. Visual Studio #1: Hit F5 (start debugging). – Visual Studio #1 correctly detects the updates of testLib, builds again, and executes testApp with the newest version of testLib.
Develop your changes in testApp and testLib. When you are finished and want to push a stable version to your CI system:

  1. Be sure testLib is correct in all configurations.
  2. Commit + push testLib.
  3. Trigger the build / update of the new NuGet package (we call it Ver++).
  4. Disable the override settings for the NuGet package.
  5. Update the testApp project to use the NuGet package Ver++. – Ideally this package should be the result of your CI system.
  6. Compile and test testApp in all configurations. – Since you had the identical code tested locally before, this should not show any problems. If it does, you likely missed something before.
  7. Commit + push your final update of testApp. – This should be solely the increment of the NuGet package version.
  8. Build / test your testApp on your CI system. – All should work fine.

Known Issues

Visual Studio Property Pages GUI

The Visual Studio GUI is sometimes not correctly updated. One such situation is created by the property page descriptions injected via NuGet packages.

If you initially add a NuGet package with such a property page, or if the NuGet package you are using changed its property page, Visual Studio might not show you the right property page (or might not show any property page at all).

If this happens, restart Visual Studio after installing / restoring the NuGet package and you should be fine.

Visual Studio’s Fast Up to Date Check misses Changes of the StringProperty

When a build is triggered in Visual Studio (e.g. by pressing F5 for a debug start), Visual Studio does not invoke MSBuild to determine if the project is up to date. Instead it uses an internal mechanism to quickly decide whether a rebuild is required. This internal mechanism, known as the fast up to date check, does not perform all required checks. E.g., it misses changes to the paths defined by the variable behind the property page GUI created by our NuGet package.

This means that whenever the value of this variable changes, you should manually trigger a rebuild of the consuming project.

One workaround would be to disable the fast up to date check for this project. However, for large projects, like SCENE, this introduces more pain than the required rebuild.

cf.: https://stackoverflow.com/q/47516374/552373
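For completeness, disabling the check is a single MSBuild property in the consuming project file; but, as said, think twice before doing this on a large project:

<PropertyGroup>
  <DisableFastUpToDateCheck>true</DisableFastUpToDateCheck>
</PropertyGroup>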

Version Schizophrenia

There is a semantic problem: you reference a NuGet package version, e.g. 1.0.0, but you override the content with changed files. Those will eventually be published as a new version, e.g. 1.0.1. Until then, however, your consuming project references them as version 1.0.0.

Your build process must be aware of this issue.

You should introduce integration tests to make sure such an information mismatch never enters release branches.

Deep Dependency Graph Version Conflict

Consuming the unstable library from the development directory is a project setting, not a solution setting. If you have a deep dependency graph and you are consuming this library via multiple routes, the override setting only influences one single route! In particular, if you use another NuGet package which itself uses your lib’s NuGet package, you cannot do anything about that route. You need to be carefully aware of potential binary compatibility issues in such a construct.

Workarounds could be based on compile-time and run-time checks of version numbers and the “dirty-lib-flag” (cf. HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR).
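A minimal sketch of such a compile-time check, using the macros our targets file already defines (the version macro TESTLIB_VERSION is hypothetical and would need to be provided by the library’s header):

#include "DemoClass.h"

#ifdef HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR
// dirty-lib-flag is set: we are building against a local override,
// so the referenced NuGet version number is not trustworthy
#pragma message("WARNING: testLib is overridden by a local development directory")
#endif

#if defined(TESTLIB_VERSION) && (TESTLIB_VERSION < 0x00010000)
// hypothetical version macro from the library header
#error "testLib is older than the minimum version this project requires"
#endif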

Voro++ on NuGet

And now there is Voro++ on NuGet. Voro++ is a library and utility, written by Chris Rycroft (with Applied Mathematics at Harvard SEAS), to compute Voronoi cells for 3D data. I actually use his lib to compute natural neighbors in my particle data sets. For me, Voro++ is better suited than other libs, like e.g. qhull, for several reasons: it is easy to use, streamlined for 3D, and natively supports periodic boundary conditions. The single drawback I found: there are no official binary releases for Windows. And that is why I started this NuGet package.

The library’s source code is rather clean and compiles without problems in Visual Studio. So all I did was compile the static library for the four latest Visual C++ versions and put the NuGet package together. My Visual Studio solution and project files are also publicly available on Bitbucket: https://bitbucket.org/sgrottel_nuget/voro

The package contains the static library, the header files, some files from the documentation, and the command line tool built with MSVC 2015. If you only want the command line tool and don’t care for the lib, simply download the NuGet package, unzip it (yes, NuGet packages are basically zip files), and you’ll find the exe in the tools subdirectory. For the full documentation of the library, go to Chris’ website or the original source code package.

What I didn’t include in the NuGet package are any samples. But compiling them yourself is as simple as it could be:

1. Start a new C++ console application in Visual Studio.

Be sure to deactivate precompiled headers and to make the project empty.


2. Drag-drop-add any .cc example.

Just drag any example file from Chris’ source code package into the project.


3. Add the voroplusplus NuGet package.


4. And you’re done! Compile it and run it at the push of a button.
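If you would rather start from a blank file than from one of the shipped examples, a minimal program looks roughly like this (a sketch based on the voro++ 0.4 API; the particle coordinates are made up):

#include "voro++.hh"

int main() {
    // container spanning the unit cube, subdivided into 6x6x6 blocks,
    // non-periodic in all three directions, 8 particles per block pre-allocated
    voro::container con(0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 6, 6, 6,
                        false, false, false, 8);

    // insert a few particles: (id, x, y, z)
    con.put(0, 0.25, 0.25, 0.25);
    con.put(1, 0.75, 0.75, 0.75);
    con.put(2, 0.75, 0.25, 0.75);

    // write all Voronoi cells in a gnuplot-friendly format
    con.draw_cells_gnuplot("cells.gnu");
    return 0;
}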

Have fun!

WinForms De-Blurred

The Problem

We have projects working with WinForms GUIs. Different developers work on different computers, and these have different DPI settings. Every time the project is opened on a system with a different DPI, the WinForms get rescaled, which is painfully bad.

Solution Summary

  • Set in your Forms designer: Font = Microsoft Sans Serif; 11px
  • In the Form’s ctor, after InitializeComponent, set: Font = SystemFonts.DefaultFont
  • Enable DPI awareness, either through a manifest or via the API function SetProcessDpiAwareness

Solution Details

Make your WinForms GUI less blurry by being DPI aware

A modern Windows approach to High-DPI Displays

More and more computers get equipped with high-DPI displays, a trend I like very much. Pixels cannot be small enough. With modern Windows, however, GUIs of older applications get infamously blurry. This is due to Microsoft’s approach to backward compatibility: applications are rendered at their native DPI and scaled up to the system’s DPI. This way, the application does not need to know anything about DPI, but user controls will keep a decent size at any setting. While most people hate the blurriness of old GUIs on modern Windows, this approach does make a lot of sense:

  • Backward compatibility is instantly given, as nothing changes for the old applications.
  • The user’s input experience is retained, as the GUI will keep its apparent size. (Just think, if you’re old enough, of the original GUI of Winamp, where the buttons in the default skin were sometimes just a few pixels in size.)
  • (Last but not least) for new applications, developers hopefully get so upset with the blurry look that they actually invest some time to do it right!

So, don’t claim it’s all Microsoft’s fault that the GUIs of your applications look blurry. Truth is, you were just too lazy to do it right.

Why WinForms?

If you browse the internet in search for how to handle high-DPI settings with WinForms, you are bound to stumble upon a smart-ass telling you to switch to WPF. That person does have a point: WPF is designed to be a GUI for all resolutions. But, that person is also an idiot. Don’t bother discussing.

If you decided to use WinForms, then use WinForms. It is not deprecated, it is not legacy, it is not broken. There are good reasons to use WinForms. One, for example, would be the nice compatibility with Mono (Linux and macOS). Another one would be compatibility with native GUI controls. Whatever your reason is, don’t let other people easily throw you off track.

If you’re not fixed on WinForms, but want to write a new Application for Windows, then have a look at WPF.

Why does Visual Studio Scale?

Normally Windows works at 96 DPI. That is, you need 96 pixels to fill up one inch of screen space. On a higher setting, let’s say 144 DPI, you need 144 pixels. So, either your GUI elements will look smaller, or your GUI elements must be larger to look the same. Modern graphical content is thus not described in pixels, but often in points (pt). Points are defined as 1/72 inch, that is in screen space, and not in pixels.
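In numbers: pixel size = pt size × DPI / 72. For example, an 8.25 pt font is 8.25 × 96 / 72 = 11 px at 96 DPI, but 8.25 × 144 / 72 = 16.5 px at 144 DPI.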

WinForms is not a modern GUI. All elements are designed in pixels. However, higher DPI settings have been present in Windows for a long time (as an accessibility feature). WinForms answers this by having a mechanism to scale the whole GUI ‘manually’. If a scaling factor different from one is determined, all GUI elements, positions, sizes, etc., are multiplied by that factor, including the overall size of the window. By default, this scaling factor is determined by comparing the font settings of the form. Fonts are usually specified with a size in pt. Windows computes the font size in pixels based on the active DPI value. If WinForms detects a mismatch between the design-time pixel-based font size and the run-time evaluation, the form and all its content are scaled. And this is exactly what happens in the Visual Studio Windows Forms designer.

Visual Studio does basically nothing at all. However, the font size evaluation is based on the system’s DPI setting. So on high-DPI systems, the font’s pixel size will be different from the stored design-time pixel size, and thus the whole form will be scaled accordingly. That is a good idea at run time; I mean, that is the whole point of this mechanism. However, we are still at design time. The problem arises because the Forms designer in Visual Studio actually runs the WinForms engine. As all GUI elements now change their sizes, the designer is informed (likely by the normal events) and adjusts all generated code to the new sizes and locations. This is, of course, ugly, painful, and stupid if you are going to continue development on another machine with another, maybe lower, DPI setting.

Disable Scaling at Design-Time, Enable Scaling at Run-Time

What I am writing here is not a premium solution. It is the workaround I found for myself to work best.

The basic idea is to (manually) disable scaling at design time, and to (manually) enable scaling at run time.

I write scaling, but what we actually change is the Font!

Keep the Form’s AutoScaleMode = Font. That setting is correct and is not the problem at all. The problem is the automatically used font: the system’s default font, which specifies its size in pt. Again, a good idea at run time. What we change is the Font setting of the form, to specify the size in pixels.

In the Designer, set the Font to: Microsoft Sans Serif; 11px

Windows’ default font is Microsoft Sans Serif 8 pt according to MSDN. Actually, it seems more like 8.25 pt. So this is 8.25/72 inch, which finally results in 8.25 × 96/72 = 11 pixels on a normal-DPI system. That is why you set the font to this painfully small looking value. It is the right one! Now edit your GUI on all systems you have. Your forms will not be scaled by Visual Studio any more. So, design time is fixed.
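In the generated designer code, this setting ends up as something like the following (a sketch of what the designer serializes for you; note the GraphicsUnit.Pixel):

// Form1.Designer.cs, inside InitializeComponent():
this.Font = new System.Drawing.Font("Microsoft Sans Serif", 11F,
    System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Pixel);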

Now for the run time. That one is easy, too. All we need to do is ‘reset’ the form’s font to have its size specified in pt again. The easiest way to do that is to reassign the system’s default font. Just set it in the constructor, right after InitializeComponent:

Font = SystemFonts.DefaultFont;

This, of course, only works if you are on a system where the system’s default font is as expected, and only if you do not change the fonts of the controls inside your form. If you did change some control’s font, specify that font with a pixel size for design time and update it at run-time initialization to use a pt-based size again.
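Putting it together, a form’s constructor would look like this minimal sketch (Form1 being a placeholder name):

public Form1() {
    InitializeComponent(); // applies the 11px design-time font

    // switch back to the pt-based system font, so the font-based
    // auto-scaling kicks in on high-DPI systems at run time
    Font = SystemFonts.DefaultFont;
}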

Scalable GUI Design

And that is that. If your application is already DPI aware, your forms will now scale nicely. That is, if you designed them correctly.

  • You should not mix docking-based and anchor-based design in the same form. That is bound to produce weird scaling issues.
  • Use one or the other; otherwise the controls will just randomly shift around.
  • Be aware that controls might no longer fit into your form, due to the maximum window size. Use flowing layout containers and auto scrollbars.

Enable DPI Awareness

All that, of course, only makes sense if you enable DPI awareness for your application. There are basically two options:

The manifest segment I use:

<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
  </windowsSettings>
</application>

The code I use:

[DllImport("SHCore.dll")]
private static extern bool SetProcessDpiAwareness(PROCESS_DPI_AWARENESS awareness);

private enum PROCESS_DPI_AWARENESS {
    Process_DPI_Unaware = 0,
    Process_System_DPI_Aware = 1,
    Process_Per_Monitor_DPI_Aware = 2
}

[DllImport("user32.dll", SetLastError = true)]
static extern bool SetProcessDPIAware();

internal static void TryEnableDPIAware() {
    try {
        SetProcessDpiAwareness(PROCESS_DPI_AWARENESS.Process_Per_Monitor_DPI_Aware);
    } catch {
        try { // fallback, use (simpler) internal function
            SetProcessDPIAware();
        } catch { }
    }
}

Maybe you know that modern Windows can be even more complicated, with per-monitor DPI settings. The idea is that attached external screens have different sizes and resolutions, and should thus be handled with different DPIs. The good news is: the approach described here works instantly with per-monitor DPI. When the form is dragged onto another monitor, the system automatically adjusts the font setting, as the font size is specified in points at run time. This automatically triggers a rescaling of the form. Wheee!

PowerShell head and tail

Most new data sets for my scientific visualization find their way to my desk in the form of arbitrarily structured text files. This is not really a problem. The first sensible step is converting them into a fast binary format for the visual analysis. With this, however, I face the problem of understanding the structure of 11-gigabyte text files (no exaggeration here!). But such files do have structure. So only the first few and the last few lines really matter; the bits in between will be roughly more of the same. What I need are the commands “head” and “tail” known from Linux. However, I am a Windows guy. So? PowerShell comes to the rescue:

gc log.txt | select -first 10 # head
gc log.txt | select -last 10 # tail

I found these on: http://stackoverflow.com/a/9682594 (where else)

At least the “head” version was fast and sufficient for me. I am happy.

The Windows PowerShell is quite nice. Of course, now all the Linux users start bitching around that they have always had something like this and that it is nothing special. And no one claims otherwise. But still, PowerShell is nice and I enjoy it. :-)

Like today: I needed a simple hex dump of a file:

PS >  $str = ""; $cnt = 0; get-content -encoding byte C:\path\to\file.txt | foreach-object { $str += (" {0:x2}" -f $_); if ($cnt++ -eq 7) { $cnt = 0; $str += "`n"; } }; write-host $str

Its elegance is limited, that I will admit; however, it does its job. And, somehow, it is quite nice …
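By the way, if you are on PowerShell 5 or newer, there is also a built-in cmdlet doing just that:

Format-Hex -Path C:\path\to\file.txt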