Today I present to you the Checkouts Overview tool.

https://github.com/sgrottel/checkouts-overview
https://go.grottel.net/checkouts-overview

What? Why? Because this little tool helps me.

In my private setup, I have a lot of smaller repos checked out and work on them only occasionally. In addition, I have several repos which collect the history of some text documents. Some of those repos are synced against servers which are only occasionally online, partly for power saving and partly due to VPN and network connectivity issues. As a result, I often lose track of the sync states of all the different repos.

Is everything checked in? — Most of the time, yes. If the change was complete.

Is everything pushed? — maybe.

Am I on branches? — no idea.

You might not need this app if you have a better-structured work process for your stuff than I do. I don’t, so I need help from a tool, from this tool.

If you are interested, you can find more info in the app’s GitHub repository.

Note: as for the app’s icon, it’s about (repository) clones, right?

Yes, I am still using AntTweakBar. As you might know, the development of AntTweakBar has been discontinued. At some point in the future, I will switch. Currently, I consider imgui the best successor, but I haven’t had time to look into it. So, when I resurrected an old small tool of mine, it still used ATB, and I did not want to recode all of it. But out of “because I can,” I decided to update all dependencies to their newest versions. As a result, the ATB integration no longer worked: a couple of callback functions were changed between GLFW 2 and GLFW 3. I ended up rewriting my glue code between those two libraries.

Here it is, if any of you ever come across the same issue. First the callbacks:

static void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (action == GLFW_PRESS || action == GLFW_REPEAT)
  {
    int twMod = 0;
    if (mods & GLFW_MOD_SHIFT) twMod |= TW_KMOD_SHIFT;
    bool ctrl = (mods & GLFW_MOD_CONTROL) != 0; // remembered for the printable-key handling below
    if (ctrl) twMod |= TW_KMOD_CTRL;
    if (mods & GLFW_MOD_ALT) twMod |= TW_KMOD_ALT;

    int twKey = 0;
    switch (key)
    {
    case GLFW_KEY_BACKSPACE: twKey = TW_KEY_BACKSPACE; break;
    case GLFW_KEY_TAB: twKey = TW_KEY_TAB; break;
    //case GLFW_KEY_???: twKey = TW_KEY_CLEAR; break;
    case GLFW_KEY_ENTER: twKey = TW_KEY_RETURN; break;
    case GLFW_KEY_PAUSE: twKey = TW_KEY_PAUSE; break;
    case GLFW_KEY_ESCAPE: twKey = TW_KEY_ESCAPE; break;
    case GLFW_KEY_SPACE: twKey = TW_KEY_SPACE; break;
    case GLFW_KEY_DELETE: twKey = TW_KEY_DELETE; break;
    case GLFW_KEY_UP: twKey = TW_KEY_UP; break;
    case GLFW_KEY_DOWN: twKey = TW_KEY_DOWN; break;
    case GLFW_KEY_RIGHT: twKey = TW_KEY_RIGHT; break;
    case GLFW_KEY_LEFT: twKey = TW_KEY_LEFT; break;
    case GLFW_KEY_INSERT: twKey = TW_KEY_INSERT; break;
    case GLFW_KEY_HOME: twKey = TW_KEY_HOME; break;
    case GLFW_KEY_END: twKey = TW_KEY_END; break;
    case GLFW_KEY_PAGE_UP: twKey = TW_KEY_PAGE_UP; break;
    case GLFW_KEY_PAGE_DOWN: twKey = TW_KEY_PAGE_DOWN; break;
    case GLFW_KEY_F1: twKey = TW_KEY_F1; break;
    case GLFW_KEY_F2: twKey = TW_KEY_F2; break;
    case GLFW_KEY_F3: twKey = TW_KEY_F3; break;
    case GLFW_KEY_F4: twKey = TW_KEY_F4; break;
    case GLFW_KEY_F5: twKey = TW_KEY_F5; break;
    case GLFW_KEY_F6: twKey = TW_KEY_F6; break;
    case GLFW_KEY_F7: twKey = TW_KEY_F7; break;
    case GLFW_KEY_F8: twKey = TW_KEY_F8; break;
    case GLFW_KEY_F9: twKey = TW_KEY_F9; break;
    case GLFW_KEY_F10: twKey = TW_KEY_F10; break;
    case GLFW_KEY_F11: twKey = TW_KEY_F11; break;
    case GLFW_KEY_F12: twKey = TW_KEY_F12; break;
    case GLFW_KEY_F13: twKey = TW_KEY_F13; break;
    case GLFW_KEY_F14: twKey = TW_KEY_F14; break;
    case GLFW_KEY_F15: twKey = TW_KEY_F15; break;
    }
    if (twKey == 0 && ctrl && key < 128)
    {
      twKey = key;
    }
    if (twKey != 0)
    {
      if (::TwKeyPressed(twKey, twMod)) return;
    }
  }
#endif
}

static void charCallback(GLFWwindow* window, unsigned int key)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwKeyPressed(key, 0)) return;
#endif
}

static void mousebuttonCallback(GLFWwindow* window, int button, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMouseButtonGLFW(button, action)) return;
#endif
}

static void mousePosCallback(GLFWwindow* window, double xpos, double ypos)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMousePosGLFW((int)xpos, (int)ypos)) return;
#endif
}

static void mouseScrollCallback(GLFWwindow* window, double xoffset, double yoffset)
{
#ifdef HAS_ANTTWEAK_BAR
  static double pos = 0;
  pos += yoffset;
  if (::TwEventMouseWheelGLFW((int)pos)) return;
#endif
}

static void resizeCallback(GLFWwindow* window, int width, int height)
{
#ifdef HAS_ANTTWEAK_BAR
  ::TwWindowSize(width, height);
#endif
}

Of course, you can omit the #ifdefs if you don’t care. Add your own code to the functions after the ATB handling.

Then, it’s just your typical initialization of GLFW callbacks:

::glfwSetKeyCallback(window, keyCallback);
::glfwSetCharCallback(window, charCallback);
::glfwSetMouseButtonCallback(window, mousebuttonCallback);
::glfwSetCursorPosCallback(window, mousePosCallback);
::glfwSetScrollCallback(window, mouseScrollCallback);
::glfwSetWindowSizeCallback(window, resizeCallback);
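
For completeness, here is roughly how the rest of the AntTweakBar lifecycle looks around it. This is only a minimal sketch, not taken from my actual tool; window creation, error handling, and the actual rendering are omitted:

#ifdef HAS_ANTTWEAK_BAR
::TwInit(TW_OPENGL, nullptr);    // use TW_OPENGL_CORE for a core profile context
int width, height;
::glfwGetWindowSize(window, &width, &height);
::TwWindowSize(width, height);   // ATB needs to know the initial window size
#endif

while (!::glfwWindowShouldClose(window))
{
  // ... render your scene here ...
#ifdef HAS_ANTTWEAK_BAR
  ::TwDraw();                    // draw all tweak bars on top of the scene
#endif
  ::glfwSwapBuffers(window);
  ::glfwPollEvents();
}

#ifdef HAS_ANTTWEAK_BAR
::TwTerminate();
#endif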

A while ago, two of my colleagues were putting effort into our main code base and build system to migrate to Visual Studio 2017 and the C++17 standard. Admirable and sensible. Of course, that was reason enough for another colleague of mine and myself to joke around about downgrading our code base to C++03 or C++98, or maybe even downright to C. Don’t worry, all four of us were laughing. (Or were we?)

At that time, my joke-buddy pointed me to a blog post by aras-p about Modern C++ Lamentations. Read it! It’s worth it. And don’t go, “that’s maybe the gaming industry. Doesn’t apply to my work.” Well, I am not working in the gaming industry. You know what: it does apply to my work pretty much 100%.

In my opinion, “modern” C++ is too complex, too bloated, too much of a “look, I can do cool code” pose, and misses the point of solving problems.

[…] to me this feels like someone decided that “Perl is clearly too readable, but Brainfuck is too unreadable, let’s aim for somewhere in the middle”.

Many language features are valid, others are just “cool.” Now, of course, I understand that different people will find different parts of the language good. There are some aspects, however, which are objectively bad. Look at the compile times and debug times mentioned in that article. Those at least make a very valid point.

C++ compilation times have been a source of pain in every non-trivial-size codebase I’ve worked on. […] Yet it feels like the C++ community at large pretends that is not an issue, with each revision of the language putting even more stuff into header files, and even more stuff into templated code that has to live in header files.

I was a hobby programmer in school; was a part-time programmer while studying software engineering; earned my Ph.D. in computer science on computer graphics and visualization, while writing a large-scale, modular, high-performance visualization software; worked as a senior software developer in a company; and I am now the manager of a team of software engineers. I think it is valid to say I have been programming almost my whole life. I still try to contribute minor improvements or bug fixes, even as a manager. Most likely my team thinks I should stop messing with “their code.” I won’t. My point is:

I have been programming almost my whole life. And I did it in more than a dozen different programming languages. (While writing this I counted 15, not including scripting languages. But most likely I forgot some.) Given this experience, let me say this:

C++ is not the best programming language. In modern C++ not everything has improved.

Please! Start (again) thinking “How do I solve this problem,” and not “How do I solve this problem with variadic templates wrapped in lambdas with ranges because they are so cool.”

When I was lecturing on C++ for computer graphics at the university, you could see, clear as daylight, the different types of up-and-coming programmers. And there is this specific sub-type of “programming artists”: programmers who think their source code is art, above and beyond the trivial programs others write. I will not comment on those any further. But I noticed that in the field where C++ is used, especially so-called modern C++, those guys are seen pretty often! Sad.

As a closing note: Nowadays, when I start a project and think about which programming language(s) to use, C++ is not on the top of the list anymore.

One of my old computer science professors, back in the day, used to say that if you use a debugger while writing your code, you are a bad programmer. My, oh my. What an idiot. It makes perfect sense to utilize a debugger as you proceed in completing your program. It’s a simple variant of divide and conquer: make sure one part works before moving on to the next. So, you see, I really value my debugger.

Many small tools I write are simple console applications. I do like graphical user interfaces a lot, and I prefer a graphical user interface over a command line interface any time. But for some small tools, especially ones which do not even require any interaction, setting up a graphical user interface is just too much work. So, even though it is very old-school, a console application is often the right choice.

This brings us to developing console applications and utilizing the debugger. Visual Studio has a very nice feature for this scenario when working with C++: it keeps the console window open and reuses it. At first this might seem useless; I know a lot of people who just close the window every time their application stops. But there is a clear and huge benefit to this function: since the console window stays open, you can inspect your application’s output for as long as you like without having to keep the debugger attached or start your application in a separate console. This actually is really handy.

For C# console applications, however, this feature does not exist. I really do not know why, and I hope Microsoft will deliver this feature for C# applications soon as well. But for now, C# has this horrible behavior that the console window closes as soon as the application exits. And this brings us back into the past, where we need some mechanism to keep the window open. One possibility is to utilize the debugger, which is attached anyway, to pause the application. I don’t want to do this using “normal” break points, as I use break points for actual debugging. Meaning, I often delete all break points and then only set those I need. Having to take care of some “special” break points would be a pain in the … well, you know.

Luckily, we can break into the debugger from code. I whipped up a small utility class:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

static class DebugHelper
{
  // Only compiled into DEBUG builds; aggressively inlined and hidden from the
  // debugger, so a break always stops at the call site, not inside this helper.
  [Conditional("DEBUG"), MethodImpl(MethodImplOptions.AggressiveInlining), DebuggerHidden]
  public static void Break()
  {
    bool launch;
    var env = Environment.GetEnvironmentVariable("LAUNCH_DEBUGGER_IF_NOT_ATTACHED");
    if (!bool.TryParse(env, out launch))
      launch = false;
    if (launch || Debugger.IsAttached)
    {
      if (Debugger.IsAttached || Debugger.Launch())
        Debugger.Break();
    }
  }
}

Now, I can just call DebugHelper.Break(); anywhere I like.
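
For illustration, a hypothetical console tool using it might look like this (the program and its output are made up):

using System;

static class Program
{
  static void Main(string[] args)
  {
    Console.WriteLine("doing the actual work...");
    // ... the tool's real logic ...

    // Pause right before exiting, so the console output stays readable.
    // This is a no-op in release builds or when no debugger is involved.
    DebugHelper.Break();
  }
}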

The calls are removed in release builds. And the aggressive inlining removes the helper function from the call stack, so that the debugger always breaks at the call to my helper function, and not within it.

For now, this is handy. And, I really hope, that in the near future this will be obsolete.

Previously, I wrote about using one global MSBuild XML file to override NuGet package content for local development. While this does work, it comes with a warning if multiple packages use this mechanism:

***Test\packages\***.0.7.1-prerelease-\build\native\***.targets(7,5):
warning MSB4011: "***Test\packagesoverride.xml.user" cannot be imported again. It was already imported at "***Test\packages\***.0.7.1-prerelease-\build\native\***.targets (6,3)".
This is most likely a build authoring error. This subsequent import will be ignored. [***Test\***Test.vcxproj]

While this is not really a problem, it is a warning. And I don’t like warnings. I like my projects to build entirely without warnings.

A solution for this comes from classic C++ programming: use an include guard. These are the changes required:

The packagesoverride.xml.user must define a guard property. I named it HAS_packagesoverride:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

And now the import of this XML in the NuGet packages’ targets files can check for this variable to avoid multiple imports:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="FaroMorfCopySymbols">

  <!-- Import override settings, if they exist -->
  <ImportGroup>
    <Import
      Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'"
      Project="$(SolutionDir)packagesoverride.xml.user" />
  </ImportGroup>

<!-- ... -->

</Project>

Previously, I wrote about using NuGet for software components which are still in active development. One of the most important factors was the capability to override the NuGet package’s content with content fetched from a directory, e.g., a working copy clone. The key element for this was an MSBuild variable, NugetDevPackageTest_testLib_DevDir.

The original plan was to edit this variable using a project property page. While having a UI is nice, this has proven not to work on larger projects. The reason is simple: in larger projects, we talk about a Visual Studio solution with multiple VC projects, and many of these projects might reference our NuGet package. If we now need to switch to our local directory, we need to adjust the project properties of every project consuming the package. This is tiring and error-prone. Forget to adjust just one project and you might end up with inconsistent builds. Therefore, I was seeking a more centralized configuration.

Update 2019-03-02

I updated the code examples to reflect the improvements I recently came up with.

Dev. Override – II

The principal idea of having a variable to control the override remains valid. The targets file in your NuGet package might look like this:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- ... -->

  <!-- Compiler settings: defines and includes -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(NugetDevPackageTest_testLib_DevDir)\Project\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- ... -->

</Project>

The obvious question is where our DevDir variable gets defined.

For this I propose a central MSBuild XML file at the level of the Visual Studio solution!

We include it in our targets file, right after the root Project tag starts:

<ImportGroup>
  <Import Project="$(SolutionDir)packagesoverride.xml.user" Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'" />
</ImportGroup>

This line imports the MSBuild XML file, if it exists. Notice how the file name is generic and not related to our specific package. This is because multiple NuGet packages can share this file!

The content of this central configuration is very simple:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

Now if you need to override a nuget package for your whole solution, just create this file!

Drawback 1: If the file did not exist previously, and you create it, then you need to rebuild all projects referencing NuGet packages that use this mechanism, because they might be affected by this file.

However, once the file does exist, changes to the file are correctly and automatically detected by Visual Studio, and build operations are correctly triggered in the affected projects. So, it might be a nice idea to keep a file with an empty property group in place, just in case.

If you need to override multiple NuGet packages at once, just add multiple entries to this one property group, as sketched below.
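
A hypothetical example with two overridden packages (the second package name is made up for illustration):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <!-- one DevDir property per overridden package -->
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
    <NugetDevPackageTest_otherLib_DevDir>C:\Dev\OtherProject\Dir</NugetDevPackageTest_otherLib_DevDir>
  </PropertyGroup>
</Project>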

Drawback 2: There is no UI. So you need to edit it in your favorite text editor, meaning, you are prone to all typing errors you can come up with.

All in all, I believe this central file for the nuget override configuration is an improvement.

As can be read all over the internet: HtmlAgilityPack is not meant for beautiful, aka human-readable, HTML files.

“[…] it’s a ‘by design’ choice.” [https://stackoverflow.com/a/5969074]

So everyone redirects you to some other library.

Now, I am a bit stubborn. I want to use HtmlAgilityPack and I want to have indented, human-readable html files. The magic is within text nodes in the DOM. So, I wrote two utility functions to help me out.

First, to get rid of all unwanted whitespace. This one might be a bit aggressive, but it was ok for me:

static private void removeWhitespace(HtmlNode node) {
  foreach (HtmlNode n in node.ChildNodes.ToArray()) {
    if (n.NodeType == HtmlNodeType.Text) {
      if (string.IsNullOrWhiteSpace(n.InnerHtml)) {
        node.RemoveChild(n);
      }
    } else removeWhitespace(n);
  }
}

And, second, to insert whitespace for line breaks and indentation:

internal static void beautify(HtmlDocument doc) {
  foreach (var topNode in doc.DocumentNode.ChildNodes.ToArray()) {
    switch (topNode.NodeType) {
      case HtmlNodeType.Comment: {
          HtmlCommentNode cn = (HtmlCommentNode)topNode;
          if (string.IsNullOrEmpty(cn.Comment)) continue;
          if (!cn.Comment.EndsWith("\n")) cn.Comment += "\n";
        } break;
      case HtmlNodeType.Element: {
          beautify(topNode, 0);
          topNode.AppendChild(doc.CreateTextNode("\n"));
          //doc.DocumentNode.InsertAfter(doc.CreateTextNode("\n"), topNode);
        } break;
      case HtmlNodeType.Text:
        break;
      default:
        break;
    }
  }
}

private static bool beautify(HtmlNode node, int level) {
  if (!node.HasChildNodes) return false;

  var children = node.ChildNodes.ToArray();
  bool onlyText = true;
  foreach (var c in children) {
    if (c.NodeType != HtmlNodeType.Text) onlyText = false;
  }
  if (onlyText) return false;

  string nli = "\n" + new string('\t', level);

  foreach (var c in children) {
    node.InsertBefore(node.OwnerDocument.CreateTextNode(nli), c);
    if (c.NodeType == HtmlNodeType.Element) {
      if (c.HasChildNodes) {
        if (beautify(c, level + 1)) {
          c.AppendChild(c.OwnerDocument.CreateTextNode(nli));
        }
      }
    }
  }
  return true;
}

As you might see, the code is pretty hacky. But it works for me. Maybe it also works for you, or it can be a starting point.
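
A minimal usage sketch, assuming it lives in the same class as the two helpers above; the method name and file names are placeholders:

using HtmlAgilityPack;

static void beautifyFile(string inPath, string outPath) {
  var doc = new HtmlDocument();
  doc.Load(inPath);

  removeWhitespace(doc.DocumentNode);  // drop whitespace-only text nodes first
  beautify(doc);                       // then re-insert line breaks and indentation

  doc.Save(outPath);
}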

This article discusses a way to create and use a NuGet package for libraries which are not yet stable and which are still under active development.

In particular, the core idea is to efficiently develop such a library and a project consuming this library in parallel.

Concept

As we develop on Windows with Visual Studio, NuGet is a powerful tool to manage library dependencies in projects, for external third-party libraries as well as for our own libraries. Using Git, our alternatives would be:

  • Git Submodules – with their known problems (detached head, multiple commits, etc.)
  • CMake – versus Visual Studio solutions
  • manual paths / zip packages – manual work

Using NuGet seems the way to go for Windows-only projects, even for native C++ components. (Side note: coapp is dead and is not missed.)

The Problem

However: if we have a library component and a consuming project which we have to actively develop at the same time, the NuGet process gets prohibitively tiring. For each change in the library, we need to build the lib, repack the NuGet package, increase its (build) version number, update the consuming project’s NuGet reference, and build the project… for every tiny typo.

The Idea

The core idea is to prepare the NuGet package in such a way that a local override of the package’s content is possible. Think of an environment variable holding the file system path to the library’s development directory. If this environment variable is not set, the content of the NuGet package is used. If the environment variable is set, then include paths and library paths are set to directly use the files found in that development directory (obviously, resulting in an unclean in-development version of the project).

Since native NuGet packages simply use an MSBuild script to integrate themselves into a project, we have (almost) all the possibilities MSBuild has to offer.

Set up the NuGet Package

Creating a native C++ library NuGet package without coapp is not very difficult, but it is out of scope for this article. For the sake of simplicity I only discuss the (second) easiest case: a static library. We need some header file and some static library files. All in all, the nuspec file looks like this:

NugetDevPackageTest.testLib.nuspec

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <!--
    Create package:
    cd solution directory
    nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build001
    -->

  <!-- package metadata -->
  <metadata>
    <id>NugetDevPackageTest.testLib</id>
    <version>$version$</version>
    <title>NugetDevPackageTest testLib</title>
    <authors>Sebastian Grottel</authors>
    <projectUrl>https://bitbucket.org/sgr_faro/nuget-dev-package-test/overview</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Nuget package of `someLib` to test dev nuget package concept</description>
    <tags>nugetdevpackagetest testLib native</tags>
  </metadata>

  <!-- content -->
  <files>
    <!-- include header files -->
    <file src="DemoClass.h" target="build\native\include" />

    <!-- static libraries -->
    <file src="lib\Win32\Debug\some.lib" target="build\native\lib\Win32\Debug" />
    <file src="lib\Win32\Release\some.lib" target="build\native\lib\Win32\Release" />
    <file src="lib\x64\Debug\some.lib" target="build\native\lib\x64\Debug" />
    <file src="lib\x64\Release\some.lib" target="build\native\lib\x64\Release" />

    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />

  </files>
</package>

There are two interesting aspects to highlight here:

  1. The static libraries can be collected from any directories. What matters is that you specify the correct target directories, following the semantic rules NuGet specifies.
  2. The build rule file, NugetDevPackageTest.testLib.targets, controls the integration of the NuGet package’s content into the Visual Studio project. This file must have the same name as the package itself.

We set up the targets file as follows:

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- If development directory is empty, use content from the package -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(MSBuildThisFileDirectory)include/</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibrarayPath>$(MSBuildThisFileDirectory)lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibrarayPath>
  </PropertyGroup>

  <!-- If development directory is set, use the content found there -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(NugetDevPackageTest_testLib_DevDir)/</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibrarayPath>$(NugetDevPackageTest_testLib_DevDir)\lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibrarayPath>
  </PropertyGroup>

  <!-- If development directory is set, define an additional macro -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- Compiler options: package macro and include paths -->
  <ItemDefinitionGroup>
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
    <ResourceCompile>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
  </ItemDefinitionGroup>

  <!-- Linker options: library (full) path -->
  <ItemDefinitionGroup>
    <Link>
      <AdditionalDependencies>$(__NugetDevPackageTest_testLib_LibrarayPath);%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>

</Project>

Create the NuGet package:

nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build002

(Obviously, adjust the version number as you like.) And be happy that it already works as intended: if the environment variable NugetDevPackageTest_testLib_DevDir is set, the content of that folder is used instead of the NuGet package’s content.

Setting an environment variable, and having to restart Visual Studio any time you change it, is obviously not great. We can improve on that by adding a GUI.

Visual Studio Property Page

We add a Visual Studio property page for your project to the NuGet package, allowing us to change the value of the variable from within Visual Studio. For this we add an XML description of the values we want to change:

overrideSettings.xml

<?xml version="1.0" encoding="utf-8"?>
<ProjectSchemaDefinitions xmlns="clr-namespace:Microsoft.Build.Framework.XamlTypes;assembly=Microsoft.Build.Framework">

  <!-- Project property settings to override content of nuget development packages -->
  <Rule Name="ProjectSettings_NugetDevOverride" PageTemplate="tool" DisplayName="Nuget Development Override" SwitchPrefix="/" Order="1">

    <!-- Category to identify this package -->
    <Rule.Categories>
      <Category Name="NugetDevPackageTest_testLib" DisplayName="NugetDevPackageTest.testLib" />
    </Rule.Categories>

    <!-- Store override settings only in the user file -->
    <Rule.DataSource>
      <DataSource Persistence="UserFile" ItemType="" />
    </Rule.DataSource>

    <!-- Override development path -->
    <StringProperty Name="NugetDevPackageTest_testLib_DevDir"
      DisplayName="Development Directory"
      Subtype="folder"
      Description="Override content of Nuget package with content of this directory. If this field is empty (default) the content of the nuget package is used. This value is persistent as user setting only."
      Category="NugetDevPackageTest_testLib" />

  </Rule>
</ProjectSchemaDefinitions>

Note that the StringProperty directly stores its value in the variable we are already using. Make sure not to use this variable as an environment variable anymore!

Now, we reference this property page from the other files of our NuGet package:

NugetDevPackageTest.testLib.nuspec

...
    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />
    <file src="overrideSettings.xml" target="build\native" />

  </files>
</package>

 

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- include the gui property page to override the development directory -->
  <ItemGroup>
    <PropertyPageSchema Include="$(MSBuildThisFileDirectory)\overrideSettings.xml" />
  </ItemGroup>

  <!-- If development directory is empty, use content from the package -->
...

Now create the new version of the package. Update the package reference in your consuming project. You might need to restart Visual Studio once. And you should find the new property page in the settings of your project.

Use the NuGet Package or the local Version

Be Aware of all the pitfalls

  • Never commit the .user file of your projects to the repository!
  • If your project code requires local changes within the code usually packed into NuGet, your project will not compile correctly on any other machine!
    • Before you push, or at least before you create a pull request, you need to push and publish the changed NuGet project and you need to reference the NuGet package cleanly without override.
  • You need two Visual Studio instances running, one for your project and one for the NuGet-ted project.
    • If you run your project in the debugger and you step into functions from the NuGet project, the corresponding source file is opened in the Visual Studio of your project. You can edit the files there, but of course you cannot compile the NuGet library there. Remember to switch to the right Visual Studio for that.

The Proposed Process

With testApp open in Visual Studio #1 and testLib in Visual Studio #2, the workflow looks like this:

  • Visual Studio #1: Clone testApp and open the testApp solution.
  • Visual Studio #1: Build testApp. This fetches the NuGet package of testLib and builds the app with the content of that package, e.g. version 1.0.0.
  • Visual Studio #2: Clone testLib, open the testLib solution, and build testLib.
  • Visual Studio #1: (Build testApp) Nothing changes. Still the NuGet package is used. The build should not do anything (up to date).
  • Visual Studio #1: Open the project settings and override the NuGet settings. Reference the directory where you cloned and built testLib.
  • Visual Studio #1: (Build testApp) Known issue: Visual Studio still believes the project is up to date.
  • Visual Studio #1: Rebuild testApp. The app is now built with the testLib built in your local directory!
  • Visual Studio #1: (Build testApp) The project is up to date (which is correct).
  • Visual Studio #2: Change some code and build testLib.
  • Visual Studio #1: Build testApp. The project correctly detects that it needs to build (at least to link) again. The changes of the local testLib are used directly.
  • Visual Studio #1: You start testApp in the debugger. You can even step into code of testLib, look at local / private variables, etc. The corresponding source files of testLib are opened in Visual Studio #1!
  • Visual Studio #1: You edit source of testLib. WRONG Visual Studio!
  • Visual Studio #1: You hit Build → testApp is up to date. It is the wrong Visual Studio; you cannot build testLib there. You need to switch to the other Visual Studio.
  • Visual Studio #2: It might detect the changes to the source files, if they are opened there. If the files are not opened, everything is ok. If Visual Studio does ask you, select to keep the changes made outside Visual Studio #2.
  • Visual Studio #1: Hit F5 (start debugging). Binary and source code do not match! Visual Studio #1 should warn you about it if you try, e.g., to set a breakpoint in source code of testLib.
  • Visual Studio #2: Build testLib. testLib is updated with the changes to the source code you made in Visual Studio #1.
  • Visual Studio #1: Hit F5 (start debugging). Visual Studio #1 correctly detects the update of testLib, builds again, and executes testApp with the newest version of testLib.

Develop your changes in testApp and testLib. Once you are finished and want to push a stable version to your CI system:

  • Make sure testLib is correct in all configurations.
  • Commit and push testLib.
  • Trigger the build / update of the new NuGet package (let’s call it Ver++).
  • Disable the override settings for the NuGet package.
  • Update the testApp project to use the NuGet package Ver++. Ideally this package should be the result of your CI system.
  • Compile and test testApp in all configurations. Since you tested the identical code locally before, this should not show any problems. If it does, you likely missed something before.
  • Commit and push your final update of testApp. This should be solely the increment of the NuGet package version.
  • Build / test your testApp on your CI system. All should work fine.

Known Issues

Visual Studio Property Pages GUI

The Visual Studio GUI is sometimes not correctly updated. One such situation is created by the property page descriptions injected via NuGet packages.

If you initially add a NuGet package with such a property page, or if the NuGet package you are using changed its property page, Visual Studio might not show you the right property page (or might not show any property page).

If this happens, restart Visual Studio after installing / restoring the NuGet package and you should be fine.

Visual Studio’s Fast Up to Date Check misses Changes to the StringProperty

When a build is triggered in Visual Studio (e.g. by pressing F5 for a debug start), Visual Studio does not invoke MSBuild to determine if the project is up to date. Instead it uses an internal mechanism to quickly decide if a rebuild is required. This internal mechanism, known as the fast up to date check, does not perform all required checks. E.g., it misses changes to the paths derived from the variable defined in the property page GUI created by our NuGet package.

This means, whenever the value of this variable is changed, you should trigger a rebuild of the consuming project.

One workaround would be to disable the fast up to date check for this project. However, for large projects, like SCENE, this introduces more pain than the required rebuild.

cf.: https://stackoverflow.com/q/47516374/552373

Version Schizophrenia

There is a semantic problem: you reference a NuGet package version, e.g. 1.0.0, but you override its content with changed files. Those will eventually be published as a new version, e.g. 1.0.1. But until then, your consuming project references them as version 1.0.0.

Your build process must be aware of this issue.

You should introduce integration tests to make sure such information mismatch never enters release branches.

Deep Dependency Graph Version Conflict

Consuming the unstable library from the development directory is a project setting, not a solution setting. If you have a deep dependency graph and you are consuming this library via multiple routes, the override setting only influences one single route! In particular, if you use another NuGet package which itself uses your lib’s NuGet package, you cannot do anything about that route. You need to be carefully aware of potential binary compatibility issues in such a construct.

Workarounds could be based on compile-time and run-time checks of version numbers and the “dirty-lib-flag” (cf. HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR).
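
As an illustration, such a compile-time check could look like this (BUILDING_FOR_RELEASE is a made-up macro you would define in your CI / release configuration):

// fail the build if a release / CI configuration consumes testLib from a local
// dev directory instead of the published NuGet package
#if defined(BUILDING_FOR_RELEASE) && defined(HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR)
#error "testLib is overridden by a local development directory; release builds must use the NuGet package."
#endif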

And now there is Voro++ on NuGet. Voro++ is a library and utility, written by Chris Rycroft (Applied Mathematics, Harvard SEAS), to compute Voronoi cells for 3D data. I actually use his lib to compute natural neighbors in my particle data sets. For me, Voro++ is better suited than other libs, like e.g. qhull, for several reasons: it is easy to use, streamlined for 3D, and natively supports periodic boundary conditions. The single drawback I found: there are no official binary releases for Windows. And that is why I started this NuGet package.

The library’s source code is rather clean and compiles without problems in Visual Studio. So all I did was compile the static library for the four latest Visual C++ versions and put the NuGet package together. My Visual Studio solution and project files are also publicly available on Bitbucket: https://bitbucket.org/sgrottel_nuget/voro

The package contains the static library, the header files, some files from the documentation, and the command line tool built with MSVC 2015. If you only want the command line tool and don’t care for the lib, simply download the NuGet package, unzip it (yes, NuGet packages are basically zip files), and you’ll find the exe in the tools subdirectory. For the full documentation of the library, go to Chris’ website or the original source code package.

What I didn’t include in the NuGet package are any samples. But compiling them yourself is as simple as it could be:

1. Start a new C++ console application in Visual Studio.

Be sure to deactivate precompiled headers and to make the project empty.


2. Drag-and-drop any cc example into the project.

Just use any example file from Chris’ source code package.


3. Add the voroplusplus NuGet package.


4. And you’re done! Compile it and run it at the push of a button.

Have fun!