Yes, I am still using AntTweakBar. As you might know, the development of AntTweakBar has been discontinued. At some point in the future, I will switch. Currently, I consider imgui the best successor, but I haven’t had time to look into it yet. So, when I resurrected an old small tool of mine, it still used ATB, and I did not want to recode all of this. But out of “because-I-can,” I decided to update all dependencies to their newest versions. As a result, the ATB integration no longer worked with GLFW 3: a couple of callback signatures changed between GLFW 2 and GLFW 3. I ended up rewriting my glue code between the two libraries.

Here it is, if any of you ever come across the same issue. First the callbacks:

static void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (action == GLFW_PRESS || action == GLFW_REPEAT)
  {
    int twMod = 0;
    if (mods & GLFW_MOD_SHIFT) twMod |= TW_KMOD_SHIFT;
    if (mods & GLFW_MOD_CONTROL) twMod |= TW_KMOD_CTRL;
    if (mods & GLFW_MOD_ALT) twMod |= TW_KMOD_ALT;
    bool ctrl = (mods & GLFW_MOD_CONTROL) != 0; // needed later for the Ctrl+key fallback

    int twKey = 0;
    switch (key)
    {
    case GLFW_KEY_BACKSPACE: twKey = TW_KEY_BACKSPACE; break;
    case GLFW_KEY_TAB: twKey = TW_KEY_TAB; break;
    //case GLFW_KEY_???: twKey = TW_KEY_CLEAR; break;
    case GLFW_KEY_ENTER: twKey = TW_KEY_RETURN; break;
    case GLFW_KEY_PAUSE: twKey = TW_KEY_PAUSE; break;
    case GLFW_KEY_ESCAPE: twKey = TW_KEY_ESCAPE; break;
    case GLFW_KEY_SPACE: twKey = TW_KEY_SPACE; break;
    case GLFW_KEY_DELETE: twKey = TW_KEY_DELETE; break;
    case GLFW_KEY_UP: twKey = TW_KEY_UP; break;
    case GLFW_KEY_DOWN: twKey = TW_KEY_DOWN; break;
    case GLFW_KEY_RIGHT: twKey = TW_KEY_RIGHT; break;
    case GLFW_KEY_LEFT: twKey = TW_KEY_LEFT; break;
    case GLFW_KEY_INSERT: twKey = TW_KEY_INSERT; break;
    case GLFW_KEY_HOME: twKey = TW_KEY_HOME; break;
    case GLFW_KEY_END: twKey = TW_KEY_END; break;
    case GLFW_KEY_PAGE_UP: twKey = TW_KEY_PAGE_UP; break;
    case GLFW_KEY_PAGE_DOWN: twKey = TW_KEY_PAGE_DOWN; break;
    case GLFW_KEY_F1: twKey = TW_KEY_F1; break;
    case GLFW_KEY_F2: twKey = TW_KEY_F2; break;
    case GLFW_KEY_F3: twKey = TW_KEY_F3; break;
    case GLFW_KEY_F4: twKey = TW_KEY_F4; break;
    case GLFW_KEY_F5: twKey = TW_KEY_F5; break;
    case GLFW_KEY_F6: twKey = TW_KEY_F6; break;
    case GLFW_KEY_F7: twKey = TW_KEY_F7; break;
    case GLFW_KEY_F8: twKey = TW_KEY_F8; break;
    case GLFW_KEY_F9: twKey = TW_KEY_F9; break;
    case GLFW_KEY_F10: twKey = TW_KEY_F10; break;
    case GLFW_KEY_F11: twKey = TW_KEY_F11; break;
    case GLFW_KEY_F12: twKey = TW_KEY_F12; break;
    case GLFW_KEY_F13: twKey = TW_KEY_F13; break;
    case GLFW_KEY_F14: twKey = TW_KEY_F14; break;
    case GLFW_KEY_F15: twKey = TW_KEY_F15; break;
    }
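    // Not a special key: forward plain key codes for Ctrl combinations
    // (e.g. Ctrl+A). Regular character input is handled by the char callback.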
    if (twKey == 0 && ctrl && key < 128)
    {
      twKey = key;
    }
    if (twKey != 0)
    {
      if (::TwKeyPressed(twKey, twMod)) return;
    }
  }
#endif
}

static void charCallback(GLFWwindow* window, unsigned int key)
{
#ifdef HAS_ANTTWEAK_BAR
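  // Character input: GLFW delivers a Unicode code point; forward it to ATB.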
  if (::TwKeyPressed(key, 0)) return;
#endif
}

static void mousebuttonCallback(GLFWwindow* window, int button, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMouseButtonGLFW(button, action)) return;
#endif
}

static void mousePosCallback(GLFWwindow* window, double xpos, double ypos)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMousePosGLFW((int)xpos, (int)ypos)) return;
#endif
}

static void mouseScrollCallback(GLFWwindow* window, double xoffset, double yoffset)
{
#ifdef HAS_ANTTWEAK_BAR
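  // ATB expects an absolute mouse wheel position, while GLFW 3 reports
  // relative scroll offsets, so we accumulate them.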
  static double pos = 0;
  pos += yoffset;
  if (::TwEventMouseWheelGLFW((int)pos)) return;
#endif
}

static void resizeCallback(GLFWwindow* window, int width, int height)
{
#ifdef HAS_ANTTWEAK_BAR
  ::TwWindowSize(width, height);
#endif
}

Of course, you can omit the #ifdefs if you don’t need them. Add your own code to the functions after the ATB handling; the early returns make sure your code only runs if ATB did not consume the event.

Then, it’s just your typical initialization of GLFW callbacks:

::glfwSetKeyCallback(window, keyCallback);
::glfwSetCharCallback(window, charCallback);
::glfwSetMouseButtonCallback(window, mousebuttonCallback);
::glfwSetCursorPosCallback(window, mousePosCallback);
::glfwSetScrollCallback(window, mouseScrollCallback);
::glfwSetWindowSizeCallback(window, resizeCallback);
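
One caveat, not part of the original glue code: GLFW only invokes the size callback when the window size actually changes, so it is probably a good idea to report the initial size to ATB once, right after registering the callbacks:

int width, height;
::glfwGetWindowSize(window, &width, &height);
::TwWindowSize(width, height);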

Some while ago, two of my colleagues were putting effort into our main code base and build system to migrate to Visual Studio 2017 and the C++17 standard. Admirable and sensible. Of course, that was reason enough for another colleague and me to joke around about downgrading our code base to C++03 or C++98, or maybe even downright to C. Don’t worry, all four of us were laughing. (Or were we?)

At that time, my joke-buddy pointed me to a blog post by aras-p about Modern C++ Lamentations. Read it! It’s worth it. And don’t go, “that’s maybe the gaming industry; doesn’t apply to my work.” Well, I am not working in the gaming industry. You know what: it does apply to my work pretty much 100%.

In my opinion, “modern” C++ is too complex, too bloated, too much of a poser for “look I can do cool code”, and misses the point of solving problems.

[…] to me this feels like someone decided that “Perl is clearly too readable, but Brainfuck is too unreadable, let’s aim for somewhere in the middle”.

Many language features are valid; others are just “cool.” Now, of course, I understand that different people will find different parts of the language good. There are some aspects, however, which are objectively bad. Look at the compile times and debug times mentioned in the article. At least those make a very valid point.

C++ compilation times have been a source of pain in every non-trivial-size codebase I’ve worked on. […] Yet it feels like the C++ community at large pretends that is not an issue, with each revision of the language putting even more stuff into header files, and even more stuff into templated code that has to live in header files.

I was a hobby programmer in school; a part-time programmer while being a student of software engineering; did my Ph.D. in computer science on computer graphics and visualization, while writing a large-scale, modular, high-performance visualization software; worked as a senior software developer in a company; and I am now the manager of a team of software engineers. I think it is valid to say I have been programming almost my whole life. I still try to contribute minor improvements or bug fixes, even as a manager. Most likely my team thinks I should stop messing in “their code.” I won’t. My point is:

I have been programming almost my whole life. And I did it in more than a dozen different programming languages. (While writing this I counted 15, not including scripting languages. But most likely I forgot some.) Given this experience, let me say this:

C++ is not the best programming language. And in modern C++, not everything has improved.

Please! Start (again) thinking “How do I solve this problem,” and not “How do I solve this problem with variadic templates wrapped in lambdas with ranges because they are so cool.”

While I was lecturing on C++ for computer graphics at the university, you could see, clear as daylight, the different types of up-and-coming programmers. And there is this specific sub-type of “programming artists”: programmers who think their source code is art, above and beyond the trivial programs others write. I will not comment on those any further. But I noticed that in the fields where C++ is used, especially so-called modern C++, those guys are seen pretty often! Sad.

As a closing note: Nowadays, when I start a project and think about which programming language(s) to use, C++ is not on the top of the list anymore.


One of my old computer science professors, back in the day, used to say: if you use a debugger while you are writing your code, you are a bad programmer. My, oh my. What an idiot. It makes perfect sense to utilize a debugger as you proceed in completing your program. It’s a simple variant of divide and conquer: let’s make sure one part works before we move on to the next. So, you see, I really value my debugger.

Many small tools I write are simple console applications. I do like graphical user interfaces a lot, and I prefer a graphical user interface over a command line interface any time. But for some small tools, especially ones which do not even require any interaction, setting up a graphical user interface is just too much work. So, even though it really is very old-school, console applications are often the right choice.

This brings us to developing console applications and utilizing the debugger. Visual Studio has a very nice feature for this scenario when working with C++: it keeps the console window open and reuses it. At first this might seem useless. I know a lot of people who just close the window every time their application stops. But there is a clear and huge benefit to this function: since the console window stays open, you can inspect your application’s output for as long as you like, without having to keep the debugger attached or start your application in a separate console. This actually is really handy.

For C# console applications, however, this feature does not exist. I really do not know why. And I hope Microsoft will deliver this feature for C# applications soon as well. But for now, C# has this horrible behavior that the console window closes as soon as the application exits. And this brings us back into the past, where we need some mechanism to keep the window open. One possibility is to utilize the debugger, which is attached anyway, to pause the application. I don’t want to do this using “normal” break points, as I use break points for actual debugging. Meaning: I often delete all break points and then only set those I need. Having to take care of some “special” break points would be a pain in the … well, you know.

Luckily, we can break into the debugger from code. Whipping up a small utility class, I got this:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

static class DebugHelper
{
  [Conditional("DEBUG"), MethodImpl(MethodImplOptions.AggressiveInlining), DebuggerHidden]
  public static void Break()
  {
    // Optionally launch a debugger if none is attached yet,
    // controlled via an environment variable.
    bool launch;
    var env = Environment.GetEnvironmentVariable("LAUNCH_DEBUGGER_IF_NOT_ATTACHED");
    if (!bool.TryParse(env, out launch))
      launch = false;
    if (launch || Debugger.IsAttached)
    {
      if (Debugger.IsAttached || Debugger.Launch())
        Debugger.Break();
    }
  }
}

Now, I can just call DebugHelper.Break(); anywhere I like.

The calls are removed in release builds, thanks to the Conditional attribute. And the aggressive inlining, together with DebuggerHidden, makes the debugger always break at the call of my helper function, and not within it.
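
In a console tool, a minimal usage sketch looks like this (the Main shown here is just a placeholder):

static void Main(string[] args)
{
  // ... the actual work of the tool ...
  Console.WriteLine("Done.");

  // Pause while a debugger is attached, so the console window stays open.
  DebugHelper.Break();
}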

For now, this is handy. And I really hope that in the near future this will be obsolete.


Previously, I wrote about using one global MSBuild XML file to override NuGet package content for local development. While this does work, it comes with a warning if multiple packages use this mechanism:

***Test\packages\***.0.7.1-prerelease-\build\native\***.targets(7,5):
warning MSB4011: "***Test\packagesoverride.xml.user" cannot be imported again. It was already imported at "***Test\packages\***.0.7.1-prerelease-\build\native\***.targets (6,3)".
This is most likely a build authoring error. This subsequent import will be ignored. [***Test\***Test.vcxproj]

While this is not really a problem, it is a warning. And I don’t like warnings. I like my projects to build entirely without warnings.

A solution for this comes from classic C/C++ programming: use an include guard. These are the changes required:

The packagesoverride.xml.user must define a guard property. I named it HAS_packagesoverride:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

And the imports of this XML in the NuGet packages’ targets files can now check for this property to avoid multiple imports:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="FaroMorfCopySymbols">

  <!-- Import override settings, if they exist -->
  <ImportGroup>
    <Import
      Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'"
      Project="$(SolutionDir)packagesoverride.xml.user" />
  </ImportGroup>

<!-- ... -->


Previously, I wrote about using NuGet for software components which are still in active development. One of the most important features was the capability to override the NuGet package’s content with content fetched from a directory, e.g., a working copy clone. The key element for this was an MSBuild variable: NugetDevPackageTest_testLib_DevDir.

The original plan was to edit this variable using a project property page. While having a UI is nice, this has proven not to work well on larger projects. The reason is simple: in larger projects, we talk about a VS solution with multiple VC++ projects, and many of these projects might reference our NuGet package. If we now need to switch to our local directory, we need to adjust the project properties for every project consuming the package. This is tiring and error-prone. Forget to adjust just one project and you might end up with inconsistent builds. Therefore, I was seeking a more centralized configuration.

Update 2019-03-02

I updated the code examples to reflect the updates I recently came up with.

Dev. Override – II

The principal idea of having a variable to control the override remains valid. The targets file in your NuGet package might look like this:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- ... -->

  <!-- Compiler settings: defines and includes -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(NugetDevPackageTest_testLib_DevDir)\Project\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- ... -->

</Project>

The obvious question is the definition of our DevDir variable.

For this, I propose a central MSBuild XML file at the level of the VS solution!

We include it in our targets file, right after the root Project tag starts:

<ImportGroup>
  <Import Project="$(SolutionDir)packagesoverride.xml.user" Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'" />
</ImportGroup>

This line imports the MSBuild XML file, if it exists. Notice how the file name is generic and not related to our specific package. This is because multiple NuGet packages can share this file!

The content of this central configuration is very simple:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

Now if you need to override a nuget package for your whole solution, just create this file!

Drawback 1: If the file did not exist previously and you create it, you need to rebuild all projects referencing NuGet packages that use this mechanism, because they might be affected by the file.

However, once the file does exist, changes to it are correctly and automatically detected by Visual Studio, and build operations are correctly triggered in the affected projects. So, it might be a nice idea to keep a file with an empty property group in place, just in case, as shown below.
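
Such a placeholder file would simply contain the guard property and an otherwise empty property group:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
  </PropertyGroup>
</Project>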

If you need to override multiple NuGet packages at once, just add multiple entries to this one property group.

Drawback 2: There is no UI. You need to edit the file in your favorite text editor, meaning you are prone to all the typing errors you can come up with.

All in all, I believe this central file for the NuGet override configuration is an improvement.


As can be read on the internet: HtmlAgilityPack is not for producing beautiful, aka human-readable, HTML files.

“[…] it’s a ‘by design’ choice.” [https://stackoverflow.com/a/5969074]

So everyone redirects you to some other library.

Now, I am a bit stubborn. I want to use HtmlAgilityPack, and I want to have indented, human-readable HTML files. The magic lies in the text nodes of the DOM. So, I wrote two utility functions to help me out.

First, one to get rid of all unwanted whitespace. This one might be a bit aggressive, but it was OK for me:

static private void removeWhitespace(HtmlNode node) {
  foreach (HtmlNode n in node.ChildNodes.ToArray()) {
    if (n.NodeType == HtmlNodeType.Text) {
      if (string.IsNullOrWhiteSpace(n.InnerHtml)) {
        node.RemoveChild(n);
      }
    } else removeWhitespace(n);
  }
}

And, second, one to create whitespace for line breaks and indentation:

internal static void beautify(HtmlDocument doc) {
  foreach (var topNode in doc.DocumentNode.ChildNodes.ToArray()) {
    switch (topNode.NodeType) {
      case HtmlNodeType.Comment: {
          HtmlCommentNode cn = (HtmlCommentNode)topNode;
          if (string.IsNullOrEmpty(cn.Comment)) continue;
          if (!cn.Comment.EndsWith("\n")) cn.Comment += "\n";
        } break;
      case HtmlNodeType.Element: {
          beautify(topNode, 0);
          topNode.AppendChild(doc.CreateTextNode("\n"));
          //doc.DocumentNode.InsertAfter(doc.CreateTextNode("\n"), topNode);
        } break;
      case HtmlNodeType.Text:
        break;
      default:
        break;
    }
  }
}

private static bool beautify(HtmlNode node, int level) {
  if (!node.HasChildNodes) return false;

  var children = node.ChildNodes.ToArray();
  bool onlyText = true;
  foreach (var c in children) {
    if (c.NodeType != HtmlNodeType.Text) onlyText = false;
  }
  if (onlyText) return false;

  string nli = "\n" + new string('\t', level);

  foreach (var c in children) {
    node.InsertBefore(node.OwnerDocument.CreateTextNode(nli), c);
    if (c.NodeType == HtmlNodeType.Element) {
      if (c.HasChildNodes) {
        if (beautify(c, level + 1)) {
          c.AppendChild(c.OwnerDocument.CreateTextNode(nli));
        }
      }
    }
  }
  return true;
}

As you can see, the code is pretty hacky. But it works for me. Maybe it also works for you, or it can serve as a starting point.
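
For completeness, a minimal usage sketch (the file names are placeholders, and the calls assume you are within the class hosting the two helpers):

var doc = new HtmlAgilityPack.HtmlDocument();
doc.Load("input.html");
removeWhitespace(doc.DocumentNode); // drop the unwanted whitespace-only text nodes
beautify(doc);                      // insert line breaks and indentation
doc.Save("output.html");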


This article discusses a way to create and use a NuGet package for libraries which are not yet stable and which are still under active development.

In particular, the core idea is to develop such a library and a project consuming this library in an efficient way.

Concept

As we develop on Windows with Visual Studio, NuGet is a powerful tool to manage library dependencies in projects, for external 3rd-party libraries as well as for our own libraries. Using Git, our alternatives would be:

  • Git Submodules – with their known problems (detached head, multiple commits, etc.)
  • cmake – vs. Visual Studio solutions
  • manual paths / zip packages – manual work

Using NuGet seems the way to go for Windows-only projects, even for native C++ components. (Side note: coapp is dead and is not missed.)

The Problem

However: if we have a library component and a consuming project which we have to actively develop at the same time, the NuGet process gets prohibitively tiring. For each change in the library, we need to build the lib, repack the NuGet package, increase its (build) version number, update the consuming project’s NuGet reference, and build the project… For every tiny typo.

The Idea

The core idea is to prepare the NuGet package in such a way that a local override of the package’s content is possible. Think of an environment variable holding the file system path to the library’s development directory. If this environment variable is not set, the content of the NuGet package is used. If the environment variable is set, then include paths and library paths are set to directly use the files found in that development directory (obviously resulting in an unclean, in-development version of the project).

Since native NuGet packages simply use an MSBuild script to integrate themselves into a project, we have (almost) all the possibilities MSBuild has to offer.

Set up the NuGet Package

Creating a native C++ library NuGet package without coapp is not very difficult, but out of scope for this article. For the sake of simplicity, I only discuss the (second) easiest case: a static library. We need some header file and some static library files. All in all, the nuspec file looks like this:

NugetDevPackageTest.testLib.nuspec

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <!--
    Create package:
    cd solution directory
    nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build001
    -->

  <!-- package metadata -->
  <metadata>
    <id>NugetDevPackageTest.testLib</id>
    <version>$version$</version>
    <title>NugetDevPackageTest testLib</title>
    <authors>Sebastian Grottel</authors>
    <projectUrl>https://bitbucket.org/sgr_faro/nuget-dev-package-test/overview</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Nuget package of `someLib` to test dev nuget package concept</description>
    <tags>nugetdevpackagetest testLib native</tags>
  </metadata>

  <!-- content -->
  <files>
    <!-- include header files -->
    <file src="DemoClass.h" target="build\native\include" />

    <!-- static libraries -->
    <file src="lib\Win32\Debug\some.lib" target="build\native\lib\Win32\Debug" />
    <file src="lib\Win32\Release\some.lib" target="build\native\lib\Win32\Release" />
    <file src="lib\x64\Debug\some.lib" target="build\native\lib\x64\Debug" />
    <file src="lib\x64\Release\some.lib" target="build\native\lib\x64\Release" />

    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />

  </files>
</package>

There are two interesting aspects to highlight here:

  1. The static libraries can be collected from any directories. What is important is that you specify the correct target directories, following the semantic rules NuGet specifies.
  2. The build rules file, NugetDevPackageTest.testLib.targets, controls the integration of the NuGet package’s content into the Visual Studio project. This file must have the same name as the package itself.

We set up the targets file as follows:

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- If development directory is empty, use content from the package -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(MSBuildThisFileDirectory)include/</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibrarayPath>$(MSBuildThisFileDirectory)lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibrarayPath>
  </PropertyGroup>

  <!-- If development directory is set, use the content found there -->
  <PropertyGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <__NugetDevPackageTest_testLib_IncludeDir>$(NugetDevPackageTest_testLib_DevDir)/</__NugetDevPackageTest_testLib_IncludeDir>
    <__NugetDevPackageTest_testLib_LibrarayPath>$(NugetDevPackageTest_testLib_DevDir)\lib\$(Platform)\$(Configuration)\some.lib</__NugetDevPackageTest_testLib_LibrarayPath>
  </PropertyGroup>

  <!-- If development directory is set, define an additional macro -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- Compiler options: package macro and include paths -->
  <ItemDefinitionGroup>
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
    <ResourceCompile>
      <AdditionalIncludeDirectories>$(__NugetDevPackageTest_testLib_IncludeDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ResourceCompile>
  </ItemDefinitionGroup>

  <!-- Linker options: library (full) path -->
  <ItemDefinitionGroup>
    <Link>
      <AdditionalDependencies>$(__NugetDevPackageTest_testLib_LibrarayPath);%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>

</Project>

Create the NuGet package:

nuget pack NugetDevPackageTest.testLib.nuspec -Version 0.2.0-build002

(Obviously, you adjust the version number as you like.) And be happy that it already works as intended: if the environment variable NugetDevPackageTest_testLib_DevDir is set, the content of that folder is used instead of the NuGet package’s content.
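
To try it out, set the variable, e.g., from a command prompt (the path is just a placeholder) and then start Visual Studio:

setx NugetDevPackageTest_testLib_DevDir C:\Dev\NugetDevPackageTest\testLib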

Setting an environment variable, and having to restart Visual Studio any time you change it, is obviously not great. We can improve on that by adding a GUI.

Visual Studio Property Page

We add a Visual Studio property page for the project to the NuGet package, allowing us to change the value of the variable from within Visual Studio. For this, we add an XML description of the values we want to change:

overrideSettings.xml

<?xml version="1.0" encoding="utf-8"?>
<ProjectSchemaDefinitions xmlns="clr-namespace:Microsoft.Build.Framework.XamlTypes;assembly=Microsoft.Build.Framework">

  <!-- Project property settings to override content of nuget development packages -->
  <Rule Name="ProjectSettings_NugetDevOverride" PageTemplate="tool" DisplayName="Nuget Development Override" SwitchPrefix="/" Order="1">

    <!-- Category to identify this package -->
    <Rule.Categories>
      <Category Name="NugetDevPackageTest_testLib" DisplayName="NugetDevPackageTest.testLib" />
    </Rule.Categories>

    <!-- Store override settings only in the user file -->
    <Rule.DataSource>
      <DataSource Persistence="UserFile" ItemType="" />
    </Rule.DataSource>

    <!-- Override development path -->
    <StringProperty Name="NugetDevPackageTest_testLib_DevDir"
      DisplayName="Development Directory"
      Subtype="folder"
      Description="Override content of Nuget package with content of this directory. If this field is empty (default) the content of the nuget package is used. This value is persistent as user setting only."
      Category="NugetDevPackageTest_testLib" />

  </Rule>
</ProjectSchemaDefinitions>

Note that the StringProperty directly stores its value in the variable we are already using. Make sure not to use this variable as an environment variable anymore!

Now, we reference this property page from the other files of our NuGet package:

NugetDevPackageTest.testLib.nuspec

...
    <!-- build rules -->
    <file src="NugetDevPackageTest.testLib.targets" target="build\native" />
    <file src="overrideSettings.xml" target="build\native" />

  </files>
</package>

NugetDevPackageTest.testLib.targets

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- include the gui property page to override the development directory -->
  <ItemGroup>
    <PropertyPageSchema Include="$(MSBuildThisFileDirectory)\overrideSettings.xml" />
  </ItemGroup>

  <!-- If development directory is empty, use content from the package -->
...

Now create the new version of the package and update the package reference in your consuming project. You might need to restart Visual Studio once. Then you should find the new property page in the settings of your project.

Use the NuGet Package or the local Version

Be Aware of all the pitfalls

  • Never commit the .user file of your projects to the repository!
  • If your project code requires local changes within the code usually packed into NuGet, your project will not compile correctly on any other machine!
    • Before you push, or at least before you create a pull request, you need to push and publish the changed NuGet project and you need to reference the NuGet package cleanly without override.
  • You need two Visual Studio instances running, one for your project and one for the NuGet-packed project.
    • If you run your project in the debugger and you step into functions from the NuGet project, the corresponding source file is opened in the Visual Studio instance of your project. You can edit the files there, but of course you cannot compile the NuGet library there. Remember to switch to the right Visual Studio for that.

The Proposed Process

The following steps alternate between the two Visual Studio instances; each one is marked with where it happens: [VS #1] for testApp, [VS #2] for testLib. Comments follow after a dash.

  • [VS #1] Clone testApp.
  • [VS #1] Open the testApp solution in Visual Studio #1.
  • [VS #1] Build testApp. – This fetches the NuGet package of testLib and builds the app with the content of that package, e.g. version 1.0.0.
  • [VS #2] Clone testLib.
  • [VS #2] Open the testLib solution in Visual Studio #2.
  • [VS #2] Build testLib.
  • [VS #1] (Build testApp.) – Nothing changes; the NuGet package is still used. The build should not do anything (up to date).
  • [VS #1] Open the project settings and override the NuGet settings; reference the directory where you cloned and built testLib.
  • [VS #1] (Build testApp.) – Known issue: Visual Studio still believes the project is up to date.
  • [VS #1] Rebuild testApp. – The app is now built with the testLib from your local directory!
  • [VS #1] (Build testApp.) – The project is up to date (which is correct).
  • [VS #2] Change some code and build testLib.
  • [VS #1] Build testApp. – The project correctly detects that it needs to build (at least to link) again. The changes of the local testLib are used directly.
  • [VS #1] Start testApp in the debugger. – You can even step into code of testLib, look at local / private variables, etc. The corresponding source files of testLib are opened in Visual Studio #1!
  • [VS #1] You edit source of testLib. – WRONG Visual Studio!
  • [VS #1] You hit Build → testApp is up to date. – It is the wrong Visual Studio; you cannot build testLib there. You need to switch to the other Visual Studio.
  • [VS #2] Visual Studio #2 might detect the changes to the source files, if they are opened. If the files are not opened, everything is OK. If Visual Studio asks, select to keep the changes made outside Visual Studio #2.
  • [VS #1] Hit F5 (start debugging). – Binary and source code do not match! Visual Studio #1 should warn you if you try, e.g., to set a breakpoint in the source code of testLib.
  • [VS #2] Build testLib. – testLib is updated with the changes to the source code you made in Visual Studio #1.
  • [VS #1] Hit F5 (start debugging). – Visual Studio #1 correctly detects the update of testLib, builds again, and executes testApp with the newest version of testLib.

Develop your changes in testApp and testLib. Once you are finished and want to push a stable version to your CI system:

  • [VS #2] Make sure testLib is correct in all configurations.
  • [VS #2] Commit + push testLib.
  • [VS #2] Trigger the build / update of the new NuGet package (let’s call it Ver++).
  • [VS #1] Disable the override settings for the NuGet package.
  • [VS #1] Update the testApp project to use the NuGet package Ver++. – Ideally, this package should be the result of your CI system.
  • [VS #1] Compile and test testApp in all configurations. – Since you tested the identical code locally before, this should not show any problems. If it does, you likely missed something earlier.
  • [VS #1] Commit + push your final update of testApp. – This should solely be the increment of the NuGet package version.
  • Build / test your testApp on your CI system. – All should work fine.

Known Issues

Visual Studio Property Pages GUI

The Visual Studio GUI is sometimes not correctly updated. One such situation is created by the property page descriptions injected via NuGet packages.

If you initially add a NuGet package with such a property page, or if the NuGet package you are using changed its property page, Visual Studio might not show you the right property page (or might not show any property page at all).

If this happens, restart Visual Studio after installing / restoring the NuGet package and you should be fine.

Visual Studio’s Fast Up to Date Check misses Changes of the StringProperty

When a build is triggered in Visual Studio (e.g. by pressing F5 for a debug start), Visual Studio does not invoke MSBuild to determine if the project is up to date. Instead, it uses an internal mechanism to quickly decide if a rebuild is required. This internal mechanism, known as the fast up-to-date check, does not perform all required checks. E.g., it misses changes to the paths defined by the variable set within the property page GUI created by our NuGet package.

This means, whenever the value of this variable is changed, you should trigger a rebuild of the consuming project.

One workaround would be to disable the fast up-to-date check for this project. However, for large projects, like SCENE, this introduces more pain than the required rebuild does.

cf.: https://stackoverflow.com/q/47516374/552373

Version Schizophrenia

There is a semantic problem: you reference a NuGet package version, e.g. 1.0.0, but you override the content with changed files. Those will eventually be published as a new version, e.g. 1.0.1. But until then, your consuming project references them as version 1.0.0.

Your build process must be aware of this issue.

You should introduce integration tests to make sure such a version mismatch never enters release branches.

Deep Dependency Graph Version Conflict

Consuming the unstable library from the development directory is a project setting, not a solution setting. If you have a deep dependency graph and you are consuming this library via multiple routes, the override setting only influences one single route! In particular, if you use another NuGet package which itself uses your lib’s NuGet package, you cannot do anything about it. You need to be carefully aware of potential binary compatibility issues in such a construct.

Workarounds could be based on compile-time and run-time checks of version numbers and of the “dirty-lib-flag” (cf. HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR).


And now there is Voro++ on NuGet. Voro++ is a library and utility, written by Chris Rycroft (with Applied Mathematics at Harvard SEAS), for computing Voronoi cells for 3D data. I actually use his lib to compute natural neighbors in my particle data sets. For me, Voro++ is better suited than other libs, like e.g. qhull, for several reasons: it is easy to use, streamlined for 3D, and natively supports periodic boundary conditions. The single drawback I found: there are no official binary releases for Windows. And that is why I started this NuGet package.

The library’s source code is rather clean and compiles without problems in Visual Studio. So all I did was compile the static library for the four latest Visual C++ versions and put the NuGet package together. My Visual Studio solution and project files are also publicly available on Bitbucket: https://bitbucket.org/sgrottel_nuget/voro

The package contains the static library, the header files, some files from the documentation, and the command line tool built with MSVC 2015. If you only want the command line tool and don’t care about the lib, simply download the NuGet package, unzip it (yes, NuGet packages are basically zip files), and you’ll find the exe in the tools subdirectory. For the full documentation of the library, go to Chris’ website or the original source code package.

What I didn’t include in the NuGet package are any samples. But compiling them yourself is as simple as it could be:

1. Start a new C++ console application in Visual Studio.

Be sure to deactivate precompiled headers and to make the project empty.


2. Drag-and-drop any .cc example into the project.

Just use any example file from Chris’ source code package.


3. Add the voroplusplus NuGet package.


4. And you’re done! Compile and run it at the push of a button.

Have fun!


WinForms De-Blurred

The Problem

We have projects working with WinForms GUIs. Different developers work on different computers, and these have different DPI settings. Every time the project is opened on a system with a different DPI setting, the forms get rescaled in the designer, which is painfully bad.

Solution Summary

  • In your forms designer, set: Font = Microsoft Sans Serif; 11px
  • In the form’s constructor, after InitializeComponent, set: Font = SystemFonts.DefaultFont
  • Enable DPI awareness, either through a manifest or via the API function SetProcessDpiAwareness

Solution Details

[Image: Make your WinForms GUI less blurry by being DPI aware]

A modern Windows approach to High-DPI Displays

More and more computers are equipped with high-DPI displays, a trend I like very much. Pixels cannot be small enough. With modern Windows, however, the GUIs of older applications get infamously blurry. This is due to Microsoft’s approach to backward compatibility: applications are rendered at their native DPI and scaled up to the system’s DPI. This way, the application does not need to know anything about DPI, but user controls keep a decent size at any setting. While most people hate the blurriness of old GUIs on modern Windows, this approach does make a lot of sense:

  • Backward compatibility is instantly given, as nothing changes for the old applications.
  • The user’s input experience is retained, as the GUI keeps its apparent size. (Just think, if you’re old enough, of the original GUI of Winamp, where the buttons in the default skin were sometimes just a few pixels in size.)
  • (Last but not least) for new applications, developers hopefully get so upset with the blurry look that they actually invest some time to do it right!

So, don’t claim it’s all Microsoft’s fault that the GUIs of your applications look blurry. Truth is, you were just too lazy to do it right.

Why WinForms?

If you browse the internet in search for how to handle high-DPI settings with WinForms, you are bound to stumble upon a smart-ass telling you to switch to WPF. That person does have a point: WPF is designed to be a GUI for all resolutions. But, that person is also an idiot. Don’t bother discussing.

If you decided to use WinForms, then use WinForms. It is not deprecated, it is not legacy, it is not broken. There are good reasons to use WinForms. One, for example, would be the nice compatibility with Mono (Linux and MacOS). Another one would be compatibility with native GUI controls. Whatever your reason is, don’t let other people easily throw you off track.

If you’re not fixed on WinForms, but want to write a new application for Windows, then have a look at WPF.

Why does Visual Studio Scale?

Normally Windows works at 96 DPI. That is, you need 96 pixels to fill up one inch of screen space. On a higher setting, let’s say 144 DPI, you need 144 pixels. So, either your GUI elements will look smaller, or your GUI elements must be larger to look the same. Modern graphical content is thus not described in pixels, but often in points (pt). Points are defined as 1/72 inch, that is in screen space, and not in pixels.

WinForms is not a modern GUI framework. All elements are designed in pixels. However, higher DPI settings have been present in Windows for a long time (as an accessibility feature). WinForms answers this with a mechanism to scale the whole GUI ‘manually’: if a scaling factor different from one is determined, all GUI elements, positions, sizes, etc. are multiplied by that factor, including the overall size of the window. By default, this scaling factor is determined by comparing the font settings of the form. Fonts are usually specified with a size in pt. Windows computes the font size in pixels based on the active DPI value. If WinForms now detects a mismatch between the design-time and run-time pixel-based font sizes, the form and all its content are scaled. And this is exactly what happens in the Visual Studio Windows Forms designer.

Visual Studio itself does basically nothing at all. However, the font size evaluation is based on the system’s DPI setting. So on high-DPI systems, the font’s pixel size differs from the stored design-time pixel size, and thus the whole form is scaled accordingly. That is a good idea at run time; I mean, that is the whole point of this mechanism. However, we are still at design time. The problem arises because the forms designer in Visual Studio actually runs the WinForms engine. As all GUI elements change their sizes, the designer is informed (likely by the normal events) and adjusts all generated code to the new sizes and locations. This is, of course, ugly, painful, and stupid if you are going to continue the development on another machine with another, maybe lower, DPI setting.

Disable Scaling at Design-Time, Enable Scaling at Run-Time

What I am writing here is not a premium solution. It is the workaround that I found works best for me.

The basic idea is to (manually) disable scaling at design time, and to (manually) enable scaling at run time.

I write scaling, but what we actually change is the Font!

Keep the Form’s AutoScaleMode = Font. That setting is correct and is not the problem at all. The problem is the automatically used font. It is the system’s default font, which specifies its size in pt. Again, a good idea at run time. What we change is the Font setting of the Form, to specify the size in pixels.

In the Designer, set the Font to: Microsoft Sans Serif; 11px

Windows’ default font is Microsoft Sans Serif 8 pt. according to MSDN. Actually, it seems to be more like 8.25 pt. So this is 8.25/72 inch, which finally results in 8.25 * 96/72 = 11 pixels on a normal-DPI system. That is why you set the font to this painfully small-looking value. It is the right one! Now edit your GUI on all systems you have: your forms will not be scaled by Visual Studio any more. So, design time is fixed.
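
If you wonder what this looks like in the generated code: InitializeComponent in the .Designer.cs file should then contain a pixel-based font assignment roughly like this (a sketch, not taken from a real project):

this.Font = new System.Drawing.Font("Microsoft Sans Serif", 11F,
    System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Pixel);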

Now for the run time. That one is easy, too. All we need to do is to ‘reset’ the Form’s font to have its size specified in pt. again. The easiest way to do that is to reassign the system’s default font. Just set it in the constructor, right after InitializeComponent:

Font = SystemFonts.DefaultFont;

This, of course, only works if you are on a system where the system’s default font is as expected, and only if you do not change the fonts of the controls inside your form. If you did change some control’s font, specify that font with a pixel-based size for design time, and update it at run-time initialization to use a pt.-based size again.
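
Putting it together, the constructor of such a form would look roughly like this (the class name MainForm is just a placeholder):

public MainForm()
{
    InitializeComponent(); // applies the px-based design-time font

    // Reassign the pt-based system default font, so the font-based
    // auto-scaling kicks in at run time.
    Font = SystemFonts.DefaultFont;
}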

Scalable GUI Design

And that is that. If your application is already DPI aware, your forms will now scale nicely. That is, if you designed them correctly.

  • You should not mix docking-based and anchor-based design in the same form. That is bound to produce weird scaling issues.
  • Use either one or the other; otherwise, the controls will just randomly shift somewhere.
  • Be aware that controls might no longer fit into your form, due to the maximum window size. Use flowing layout containers and auto scrollbars.

Enable DPI Awareness

All that, of course, only makes any sense if you enable DPI awareness for your application. There are basically two options:

The manifest segment I use:

<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
  </windowsSettings>
</application>

The code I use:

[DllImport("SHCore.dll")]
private static extern bool SetProcessDpiAwareness(PROCESS_DPI_AWARENESS awareness);

private enum PROCESS_DPI_AWARENESS {
    Process_DPI_Unaware = 0,
    Process_System_DPI_Aware = 1,
    Process_Per_Monitor_DPI_Aware = 2
}

[DllImport("user32.dll", SetLastError = true)]
static extern bool SetProcessDPIAware();

internal static void TryEnableDPIAware() {
    try {
        // Windows 8.1 and newer: per-monitor DPI awareness
        SetProcessDpiAwareness(PROCESS_DPI_AWARENESS.Process_Per_Monitor_DPI_Aware);
    } catch { // e.g. DllNotFoundException on systems without SHCore.dll
        try { // fallback: the (simpler) older function, system-wide DPI awareness
            SetProcessDPIAware();
        } catch { }
    }
}

Maybe you know that modern Windows can be even more complicated, with per-monitor DPI settings. The idea is that attached external screens have different sizes and resolutions, and should thus be handled with different DPIs. The good news is: the approach described here instantly works with per-monitor DPI. When the form is dragged onto another monitor, the system automatically adjusts the font setting, as the font size is specified in points at run time. This automatically triggers a rescaling of the form. Wheee!
