I came across that NuGet package because I was writing WPF UI applications and needed a folder picker. Someone on the internet suggested using the WinForms dialogs, but I sort of hate mixing two UI frameworks. Then someone else brought up the NuGet package WindowsAPICodePack.Shell, with its class representations of the Windows common dialogs, including the capability of picking folders in the open dialog window.

So, I started using that. Then, at some point, a friend pointed out to me that the NuGet package I was using did not look like an official Microsoft package, but like a repackage someone did. That made me stop and think. I don’t assume any bad intent, but I found it very, very strange. The official package vanished. And there is a huge load of packages with strange names:

I don’t like this. Even if there is no ill intent from any of the authors, I still don’t like this, as it reeks of fraud, phishing, and vulnerabilities. Sorry, but, no.

So, where is the official package? It seems to have disappeared; that’s why there are the repackages by community members. Why did it disappear? No idea. Maybe it got caught in a semi-automatic cleanup as it was orphaned. Someone suggested it has been replaced by Microsoft.Windows.SDK.Contracts.

In the end, I replaced the code in my projects by either using the WinForms dialog, or by writing a very small P/Invoke wrapper class calling the Win32 API directly. If you are interested, have a look:

https://github.com/sgrottel/open-here/commit/9de68198e35f0f6dec9386372cc71bada54c2f5b
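The core of that approach is the Win32 IFileOpenDialog COM interface with the FOS_PICKFOLDERS option. The following is a minimal sketch of the idea, not the exact code from the commit above; the COM interface declaration is truncated after the last method this sketch needs:

using System;
using System.Runtime.InteropServices;

internal static class FolderPicker
{
  private const uint FOS_PICKFOLDERS = 0x20;
  private const uint SIGDN_FILESYSPATH = 0x80058000;

  [ComImport, Guid("DC1C5A9C-E88A-4dde-A5A1-60F82A20AEF7")]
  private class FileOpenDialog { }

  [ComImport, Guid("42f85136-db7e-439c-85f1-e4075d135fc8"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
  private interface IFileDialog
  {
    // vtable order matters; the entries must match shobjidl.h
    [PreserveSig] int Show(IntPtr hwndOwner);
    void SetFileTypes(uint cFileTypes, IntPtr rgFilterSpec);
    void SetFileTypeIndex(uint iFileType);
    void GetFileTypeIndex(out uint piFileType);
    void Advise(IntPtr pfde, out uint pdwCookie);
    void Unadvise(uint dwCookie);
    void SetOptions(uint fos);
    void GetOptions(out uint pfos);
    void SetDefaultFolder(IShellItem psi);
    void SetFolder(IShellItem psi);
    void GetFolder(out IShellItem ppsi);
    void GetCurrentSelection(out IShellItem ppsi);
    void SetFileName([MarshalAs(UnmanagedType.LPWStr)] string pszName);
    void GetFileName([MarshalAs(UnmanagedType.LPWStr)] out string pszName);
    void SetTitle([MarshalAs(UnmanagedType.LPWStr)] string pszTitle);
    void SetOkButtonLabel([MarshalAs(UnmanagedType.LPWStr)] string pszText);
    void SetFileNameLabel([MarshalAs(UnmanagedType.LPWStr)] string pszLabel);
    void GetResult(out IShellItem ppsi);
    // remaining methods omitted; they are never called through this sketch
  }

  [ComImport, Guid("43826d1e-e718-42ee-bc55-a1e261c37bfe"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
  private interface IShellItem
  {
    void BindToHandler(IntPtr pbc, ref Guid bhid, ref Guid riid, out IntPtr ppv);
    void GetParent(out IShellItem ppsi);
    void GetDisplayName(uint sigdnName, [MarshalAs(UnmanagedType.LPWStr)] out string ppszName);
    void GetAttributes(uint sfgaoMask, out uint psfgaoAttribs);
    void Compare(IShellItem psi, uint hint, out int piOrder);
  }

  // shows the folder picker; returns the selected path, or null if cancelled
  public static string Show(IntPtr owner)
  {
    var dialog = (IFileDialog)new FileOpenDialog();
    dialog.GetOptions(out uint options);
    dialog.SetOptions(options | FOS_PICKFOLDERS);
    if (dialog.Show(owner) != 0) return null; // cancelled or failed
    dialog.GetResult(out IShellItem item);
    item.GetDisplayName(SIGDN_FILESYSPATH, out string path);
    return path;
  }
}

From a WPF window, you would pass the handle obtained via WindowInteropHelper as the owner.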

The moral of the story is, a Nuget package is only as good as the people maintaining it. And, I mean people, not organizations. Because in the end, it’s whether or not individuals want to give their best.

Git has this cursed function to fuck up your files by doing unspeakable things to your line endings.

For example, from GitHub’s documentation on line endings:

On Windows, you simply pass true to the configuration. For example:

$ git config --global core.autocrlf true

Please, never never never never never never never never never never never never do this!

THERE IS NO REASON TO DO IT!

Git is here to keep track of our files, NOT TO CHANGE OUR FILES IN ANY WAY.

So, please, just, never never never never never never never never never never never never do this! Leave my line endings alone!
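For the record: the opposite setting tells Git to leave line endings alone, and a .gitattributes line can enforce that per repository, regardless of anyone’s local configuration. (This is my suggestion, not part of the GitHub documentation quoted above.)

$ git config --global core.autocrlf false

And in the repository’s .gitattributes:

* -text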

Today I release a new version of my Checkouts Overview tool.

Version 1.1 is a feature release, improving the scanning of your hard disks for repository checkouts, and adding the ability to perform a git fetch while updating an entry’s status.

Some minor improvements to the UI also provide a more consistent look and feel.

Grab the release from GitHub: Feature Release v1.1 – Better Disk Scanning and Git Fetch · sgrottel/checkouts-overview

Today I present to you the Checkouts Overview tool.

https://github.com/sgrottel/checkouts-overview
https://go.grottel.net/checkouts-overview

What? Why? Because this little tool helps me.

In my private setup, I have a lot of smaller repos checked out and work on them only occasionally. In addition, I have several repos to collect the history of some text documents. Some of those repos are synced against servers which are only occasionally online, partly for power saving, partly due to VPN and network connectivity stuff. As a result, I keep losing track of the sync states of all the different repos.

Is everything checked in? — most times, yes. If the change was complete.

Is everything pushed? — maybe.

Am I on branches? — no idea.

You might not need this app if you have a better-structured work process than I do. I don’t, so I need help from a tool: this tool.

If you are interested, you will find more info in the app’s GitHub repository.

Note: and as for the app’s icon, it’s about (repository) clones, right.

Yes, I am still using AntTweakBar. As you might know, the development of AntTweakBar is discontinued. At some point in the future, I will switch. Currently, I consider imgui the best successor, but I haven’t had time to look into it. So, when I resurrected an old small tool of mine, it still used ATB, and I did not want to recode all of it. But out of “because-I-can,” I decided to update all dependencies to their newest versions. As a result, the ATB integration with GLFW 3 did not work any longer. A couple of callback functions were changed between GLFW 2 and GLFW 3. I ended up rewriting my glue code between those two libraries.

Here it is, if any of you ever come across the same issue. First the callbacks:

static void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (action == GLFW_PRESS || action == GLFW_REPEAT)
  {
    int twMod = 0;
    if (mods & GLFW_MOD_SHIFT) twMod |= TW_KMOD_SHIFT;
    bool ctrl = (mods & GLFW_MOD_CONTROL) != 0;
    if (ctrl) twMod |= TW_KMOD_CTRL;
    if (mods & GLFW_MOD_ALT) twMod |= TW_KMOD_ALT;

    int twKey = 0;
    switch (key)
    {
    case GLFW_KEY_BACKSPACE: twKey = TW_KEY_BACKSPACE; break;
    case GLFW_KEY_TAB: twKey = TW_KEY_TAB; break;
    //case GLFW_KEY_???: twKey = TW_KEY_CLEAR; break;
    case GLFW_KEY_ENTER: twKey = TW_KEY_RETURN; break;
    case GLFW_KEY_PAUSE: twKey = TW_KEY_PAUSE; break;
    case GLFW_KEY_ESCAPE: twKey = TW_KEY_ESCAPE; break;
    case GLFW_KEY_SPACE: twKey = TW_KEY_SPACE; break;
    case GLFW_KEY_DELETE: twKey = TW_KEY_DELETE; break;
    case GLFW_KEY_UP: twKey = TW_KEY_UP; break;
    case GLFW_KEY_DOWN: twKey = TW_KEY_DOWN; break;
    case GLFW_KEY_RIGHT: twKey = TW_KEY_RIGHT; break;
    case GLFW_KEY_LEFT: twKey = TW_KEY_LEFT; break;
    case GLFW_KEY_INSERT: twKey = TW_KEY_INSERT; break;
    case GLFW_KEY_HOME: twKey = TW_KEY_HOME; break;
    case GLFW_KEY_END: twKey = TW_KEY_END; break;
    case GLFW_KEY_PAGE_UP: twKey = TW_KEY_PAGE_UP; break;
    case GLFW_KEY_PAGE_DOWN: twKey = TW_KEY_PAGE_DOWN; break;
    case GLFW_KEY_F1: twKey = TW_KEY_F1; break;
    case GLFW_KEY_F2: twKey = TW_KEY_F2; break;
    case GLFW_KEY_F3: twKey = TW_KEY_F3; break;
    case GLFW_KEY_F4: twKey = TW_KEY_F4; break;
    case GLFW_KEY_F5: twKey = TW_KEY_F5; break;
    case GLFW_KEY_F6: twKey = TW_KEY_F6; break;
    case GLFW_KEY_F7: twKey = TW_KEY_F7; break;
    case GLFW_KEY_F8: twKey = TW_KEY_F8; break;
    case GLFW_KEY_F9: twKey = TW_KEY_F9; break;
    case GLFW_KEY_F10: twKey = TW_KEY_F10; break;
    case GLFW_KEY_F11: twKey = TW_KEY_F11; break;
    case GLFW_KEY_F12: twKey = TW_KEY_F12; break;
    case GLFW_KEY_F13: twKey = TW_KEY_F13; break;
    case GLFW_KEY_F14: twKey = TW_KEY_F14; break;
    case GLFW_KEY_F15: twKey = TW_KEY_F15; break;
    }
    if (twKey == 0 && ctrl && key < 128)
    {
      twKey = key;
    }
    if (twKey != 0)
    {
      if (::TwKeyPressed(twKey, twMod)) return;
    }
  }
#endif
}

static void charCallback(GLFWwindow* window, unsigned int key)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwKeyPressed(key, 0)) return;
#endif
}

static void mousebuttonCallback(GLFWwindow* window, int button, int action, int mods)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMouseButtonGLFW(button, action)) return;
#endif
}

static void mousePosCallback(GLFWwindow* window, double xpos, double ypos)
{
#ifdef HAS_ANTTWEAK_BAR
  if (::TwEventMousePosGLFW((int)xpos, (int)ypos)) return;
#endif
}

static void mouseScrollCallback(GLFWwindow* window, double xoffset, double yoffset)
{
#ifdef HAS_ANTTWEAK_BAR
  static double pos = 0;
  pos += yoffset;
  if (::TwEventMouseWheelGLFW((int)pos)) return;
#endif
}

static void resizeCallback(GLFWwindow* window, int width, int height)
{
#ifdef HAS_ANTTWEAK_BAR
  ::TwWindowSize(width, height);
#endif
}

Of course, you can omit the #ifdefs if you don’t care. Add your own code to these functions after the ATB handling.

Then, it’s just your typical initialization of GLFW callbacks:

::glfwSetKeyCallback(window, keyCallback);
::glfwSetCharCallback(window, charCallback);
::glfwSetMouseButtonCallback(window, mousebuttonCallback);
::glfwSetCursorPosCallback(window, mousePosCallback);
::glfwSetScrollCallback(window, mouseScrollCallback);
::glfwSetWindowSizeCallback(window, resizeCallback);
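One caveat from my side (an assumption about GLFW 3 behavior, so verify it in your setup): the window size callback does not fire for the initial window size, so you likely want to hand the initial size to ATB once, right after initialization:

int width = 0, height = 0;
::glfwGetWindowSize(window, &width, &height);
#ifdef HAS_ANTTWEAK_BAR
::TwWindowSize(width, height);
#endif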

A while ago, two of my colleagues were putting effort into our main code base and build system to migrate to Visual Studio 2017 and the C++17 standard. Admirable and sensible. Of course, that was reason enough for another colleague and me to joke around about downgrading our code base to C++03 or C++98, or maybe even downright to C. Don’t worry, all four of us were laughing. (Or were we?)

At that time, my joke-buddy pointed me to a blog post by aras-p about Modern C++ Lamentations. Read it! It’s worth it. And don’t go, “that’s maybe true in the gaming industry; it doesn’t apply to my work.” Well, I am not working in the gaming industry. And you know what: it applies to my work pretty much 100%.

In my opinion, “modern” C++ is too complex, too bloated, too much posing for “look, I can do cool code,” and it misses the point of solving problems.

[…] to me this feels like someone decided that “Perl is clearly too readable, but Brainfuck is too unreadable, let’s aim for somewhere in the middle”.

Many language features are valid; others are just “cool.” Now, of course, I understand that different people will find different parts of the language good. There are some aspects, however, which are objectively bad. Look at the compile times and debug times mentioned in that article. Those alone make a very valid point.

C++ compilation times have been a source of pain in every non-trivial-size codebase I’ve worked on. […] Yet it feels like the C++ community at large pretends that is not an issue, with each revision of the language putting even more stuff into header files, and even more stuff into templated code that has to live in header files.

I have been a hobby programmer in school; was a part-time programmer while being a student of software engineering; did my Ph.D. in computer science on computer graphics and visualization, while writing a large-scale, modular, high-performance visualization software; worked as a senior software developer in a company; and I am now manager of a team of software engineers. I think it is valid to say I have been programming almost my whole life. I still try to contribute some minor improvements or bug fixes, even as a manager. Most likely my team thinks I should stop messing in “their code.” I won’t. My point is:

I have been programming almost my whole life. And I did it in more than a dozen different programming languages. (While writing this I counted 15, not including scripting languages. But most likely I forgot some.) Given this experience, let me say this:

C++ is not the best programming language. In modern C++, not everything has improved.

Please! Start (again) thinking “How do I solve this problem,” and not “How do I solve this problem with variadic templates wrapped in lambdas with ranges because they are so cool.”

While I was lecturing at the university on C++ for computer graphics, you could see, clear as daylight, the different types of up-and-coming programmers. And there is this specific sub-type of “programming artists”: programmers who think their source code is art, above and beyond the trivial programs others write. I will not comment on those any further. But I noticed that in the fields where C++ is used, especially so-called modern C++, those guys are seen pretty often! Sad.

As a closing note: Nowadays, when I start a project and think about which programming language(s) to use, C++ is not on the top of the list anymore.

One of my old computer science professors, back in the day, used to say that if you use a debugger while you are writing your code, you are a bad programmer. My, oh my. What an idiot. It makes perfect sense to utilize a debugger as you proceed in completing your program. It’s a simple variant of divide and conquer: let’s make sure one part works before we move on to the next. So, you see, I really value my debugger.

Many small tools I write are simple console applications. I do like graphical user interfaces a lot, and I prefer a graphical user interface over a command line interface any time. But for some small tools, especially ones which do not even require any interaction, setting up a graphical user interface is just too much work. So, even though it really is very old-school, console applications are often the right choice.

This brings us to developing console applications and utilizing the debugger. Visual Studio has a very nice feature for this scenario when working with C++: it keeps the console window open and reuses it. At first this might seem useless. I know a lot of people who just close the window every time their application stops. But there is a clear and huge benefit from this feature: since the console window stays open, you can inspect your application’s output for as long as you like, without having to keep the debugger attached or starting your application in a separate console. This actually is really handy.

For C# console applications, however, this feature does not exist. I really do not know why. And I hope that Microsoft will deliver this feature for C# applications soon as well. But for now, C# has this horrible behavior that the console window closes as soon as the application exits. And this brings us back into the past, where we need some mechanism to keep the window open. One possibility is to utilize the debugger, which is attached anyway, to pause the application. I don’t want to do this using “normal” break points, as I use break points to do actual debugging. Meaning, I often delete all break points, and then only set those I need. Having to take care of some “special” break points would be a pain in the … well, you know.

Luckily, we can break into the debugger from code. Whipping up a small utility class, I got this:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

static class DebugHelper
{
  [Conditional("DEBUG"), MethodImpl(MethodImplOptions.AggressiveInlining), DebuggerHidden]
  static public void Break()
  {
    // optionally launch a debugger if none is attached,
    // controlled by an environment variable
    bool launch;
    var env = Environment.GetEnvironmentVariable("LAUNCH_DEBUGGER_IF_NOT_ATTACHED");
    if (!bool.TryParse(env, out launch))
      launch = false;
    if (launch || Debugger.IsAttached)
    {
      if (Debugger.IsAttached || Debugger.Launch())
        Debugger.Break();
    }
  }
}

Now, I can just call DebugHelper.Break(); anywhere I like.
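For illustration, a hypothetical little console tool could use it at the end of its Main like this:

static void Main(string[] args)
{
  Console.WriteLine("work, work ...");
  // ... the tool's actual work ...

  // pause before the console window closes, but only in debug builds,
  // and only when a debugger is attached or requested via the environment
  DebugHelper.Break();
}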

The calls are removed in release builds by the Conditional attribute. And the aggressive inlining removes the helper function from the call stack, so that the debugger always breaks at the call site of my helper function, and not within it.

For now, this is handy. And, I really hope, that in the near future this will be obsolete.

Previously, I wrote about using one global MSBuild XML file to override NuGet package content for local development. While this does work, it comes with a warning if multiple packages use this mechanism:

***Test\packages\***.0.7.1-prerelease-\build\native\***.targets(7,5):
warning MSB4011: "***Test\packagesoverride.xml.user" cannot be imported again. It was already imported at "***Test\packages\***.0.7.1-prerelease-\build\native\***.targets (6,3)".
This is most likely a build authoring error. This subsequent import will be ignored. [***Test\***Test.vcxproj]

While this is not really a problem, it is a warning. And I don’t like warnings. I like my projects to build entirely without warnings.

A solution for this comes from classic C++ programming: use an include guard. These are the changes required:

The packagesoverride.xml.user must define a guard variable. I named it HAS_packagesoverride:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

And now, importing this XML in the NuGet packages’ targets files can check for this variable to avoid multiple imports:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="FaroMorfCopySymbols">

  <!-- Import override settings, if they exist -->
  <ImportGroup>
    <Import
      Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'"
      Project="$(SolutionDir)packagesoverride.xml.user" />
  </ImportGroup>

  <!-- ... -->

</Project>

Previously, I wrote about using NuGet for software components which are still in active development. One of the most important factors was the capability to override a NuGet package’s content with content fetched from a directory, e.g., a working copy clone. The key element for this was an MSBuild variable, NugetDevPackageTest_testLib_DevDir.

The original plan was to edit this variable using a project property page. While having a UI is nice, this has proven not to work on larger projects. The reason is simple: in larger projects, we talk about a VS solution with multiple VC projects, and many of these projects might reference our NuGet package. If we now need to switch to our local directory, we need to adjust the project properties of every project consuming the package. This is tiring and error-prone. Forget to adjust just one project, and you might end up with inconsistent builds. Therefore, I was seeking a more centralized configuration.

Update 2019-03-02

I updated the code examples to reflect the updates I recently came up with.

Dev. Override – II

The principal idea of having a variable to control the override remains valid. The targets file in your NuGet package might look like this:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- ... -->

  <!-- Compiler settings: defines and includes -->
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' == ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(NugetDevPackageTest_testLib_DevDir)' != ''">
    <ClCompile>
      <PreprocessorDefinitions>HAS_NUGETDEVPACKAGETEST_TESTLIB;HAS_NUGETDEVPACKAGETEST_TESTLIB_DEVDIR;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>$(NugetDevPackageTest_testLib_DevDir)\Project\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>

  <!-- ... -->

</Project>

The obvious question is where our DevDir variable gets defined.

For this, I propose a central MSBuild XML file at the level of the VS solution!

We include it in our targets file, right after the root Project tag starts:

<ImportGroup>
  <Import Project="$(SolutionDir)packagesoverride.xml.user" Condition="Exists('$(SolutionDir)packagesoverride.xml.user') and '$(HAS_packagesoverride)' != 'True'" />
</ImportGroup>

This imports the MSBuild XML file, if it exists. Notice how the file name is generic and not related to our specific package. This is because multiple NuGet packages can share this file!

The content of this central configuration is very simple:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
  </PropertyGroup>
</Project>

Now, if you need to override a NuGet package for your whole solution, just create this file!

Drawback 1: If the file did not exist previously, and you create it, then you need to rebuild all projects referencing NuGet packages that use this mechanism, because they might be affected by this file.

However, once the file does exist, changes to it are correctly and automatically detected by Visual Studio, and build operations are correctly triggered in the affected projects. So, it might be a nice idea to keep a file with an empty property group in place, just in case.

If you need to override multiple NuGet packages at once, just add multiple entries to this one property group, as shown below.
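A hypothetical override file for two packages (the second property name is made up for illustration) would look like this:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <HAS_packagesoverride>True</HAS_packagesoverride>
    <NugetDevPackageTest_testLib_DevDir>C:\Dev\SomeProject\Dir</NugetDevPackageTest_testLib_DevDir>
    <AnotherVendor_otherLib_DevDir>C:\Dev\OtherProject\Dir</AnotherVendor_otherLib_DevDir>
  </PropertyGroup>
</Project>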

Drawback 2: There is no UI. So you need to edit it in your favorite text editor, meaning you are prone to all the typing errors you can come up with.

All in all, I believe this central file for the nuget override configuration is an improvement.

As can be read on the internet: HtmlAgilityPack is not for beautiful, aka human-readable, HTML files.

“[…] it’s a ‘by design’ choice.” [https://stackoverflow.com/a/5969074]

So everyone redirects you to some other library.

Now, I am a bit stubborn. I want to use HtmlAgilityPack and I want to have indented, human-readable HTML files. The magic lies within the text nodes in the DOM. So, I wrote two utility functions to help me out.

First, one to get rid of all unwanted whitespace. This one might be a bit aggressive, but it was OK for me:

static private void removeWhitespace(HtmlNode node) {
  foreach (HtmlNode n in node.ChildNodes.ToArray()) {
    if (n.NodeType == HtmlNodeType.Text) {
      if (string.IsNullOrWhiteSpace(n.InnerHtml)) {
        node.RemoveChild(n);
      }
    } else removeWhitespace(n);
  }
}

And, second, one to create whitespace for line breaks and indentation:

internal static void beautify(HtmlDocument doc) {
  foreach (var topNode in doc.DocumentNode.ChildNodes.ToArray()) {
    switch (topNode.NodeType) {
      case HtmlNodeType.Comment: {
          HtmlCommentNode cn = (HtmlCommentNode)topNode;
          if (string.IsNullOrEmpty(cn.Comment)) continue;
          if (!cn.Comment.EndsWith("\n")) cn.Comment += "\n";
        } break;
      case HtmlNodeType.Element: {
          beautify(topNode, 0);
          topNode.AppendChild(doc.CreateTextNode("\n"));
          //doc.DocumentNode.InsertAfter(doc.CreateTextNode("\n"), topNode);
        } break;
      case HtmlNodeType.Text:
        break;
      default:
        break;
    }
  }
}

private static bool beautify(HtmlNode node, int level) {
  if (!node.HasChildNodes) return false;

  var children = node.ChildNodes.ToArray();
  bool onlyText = true;
  foreach (var c in children) {
    if (c.NodeType != HtmlNodeType.Text) onlyText = false;
  }
  if (onlyText) return false;

  string nli = "\n" + new string('\t', level);

  foreach (var c in children) {
    node.InsertBefore(node.OwnerDocument.CreateTextNode(nli), c);
    if (c.NodeType == HtmlNodeType.Element) {
      if (c.HasChildNodes) {
        if (beautify(c, level + 1)) {
          c.AppendChild(c.OwnerDocument.CreateTextNode(nli));
        }
      }
    }
  }
  return true;
}

As you might see, the code is pretty hacky. But, it works for me. Maybe, it also works for you, or it can be a starting point.
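For completeness, a hypothetical usage of the two helpers, assuming they live in the same class (the file names are made up):

var doc = new HtmlDocument();
doc.Load("input.html");
removeWhitespace(doc.DocumentNode);
beautify(doc);
doc.Save("output.html");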