Some time ago I started a section on my website about tools I use and like. I started that series writing about the Everything search tool by Voidtools, which is lightning fast and awesome.

Since then, I have integrated Everything into several of my internal tools. Most of the time, I used the Everything command line client and parsed its output. However, I had some trouble with Unicode file names. Then I looked at .NET library solutions, namely the Everything .Net Client and EverythingNet. Both are basically only P/Invoke wrappers around the Everything SDK, which itself is a wrapper around inter-process communication (IPC) with the Everything service. And so, since I know my stuff around low-level techniques like Windows-message-based IPC, and since I don’t like wrappers of wrappers of functions, I decided to write a library of my own: Everything Search Client

It is a .NET 6.0 library, written entirely in C#, with some P/Invoke calls to native Windows operating system functions, talking directly to the Everything service.
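To give an idea of how using the library looks, here is a minimal sketch. The class and member names are assumptions for illustration, not the library’s confirmed API; check the readme on GitHub for the real interface:

using EverythingSearchClient;

class Demo
{
    static void Main()
    {
        // query the running Everything service for all markdown files
        SearchClient everything = new();
        Result result = everything.Search("*.md");
        foreach (Result.Item item in result.Items)
        {
            System.Console.WriteLine($"{item.Path}\\{item.Name}");
        }
    }
}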

The code is available on GitHub and the ready-to-use NuGet package is on NuGet.org.

If you find it useful and use it in a tool of your own, I would love to hear about it: see Used By and How to Contribute.

I present a new little tool with a very specific purpose: OpenHere

It detects running instances of the Windows File Explorer. From the top-most instance, it fetches the opened path and any selected files. You can use the command line application to retrieve this information. Or you can use the GUI application, which displays a tool window to select and open one of up to twelve configured tools.

My keyboard has several freely assignable macro keys, which I did not use for years, simply because I had no idea what to do with them. Then Windows 11 came along. One of its most criticized features is the new context menu in the File Explorer, which hides away most functions you might or might not want to call on files and folders. That’s when I thought it would be nice to use the macro keys to trigger something on the selected file, like opening it in Notepad++, or opening the whole folder in Visual Studio Code or in Fork, or something like that. And that’s what I wrote OpenHere for.

I learned quite a bit about low-level icon handling and loading of large icons, and I gained more experience working with WPF.

Today I release a new version of my Checkouts Overview tool.

Version 1.1 is a feature release, improving the scanning of your hard disks for repository checkouts, and adding the ability to perform a git fetch while updating the entry status.

Some minor improvements to the UI also provide a more consistent look and feel.

Grab the release from GitHub: Release Feature Release v1.1 – Better Disk Scanning and Git Fetch · sgrottel/checkouts-overview (github.com)

I got a new tool in my toolbox: KeePass HotKey is a wrapper utility to open a KeePass DB or trigger the Auto-Type feature.

This utility is very specific to my use case:

  • I want to trigger one action from a dedicated hardware key on my keyboard
  • This action should either open a specific KeePass database file, configured by the user, or
  • Trigger the “auto-type selected” feature of KeePass, if a KeePass instance is running and has a selected entry.
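A minimal sketch of that dispatch logic in C# follows. The KeePass process name, the database path, and the “--auto-type-selected” command line switch are assumptions for illustration; this is not the tool’s actual code:

using System.Diagnostics;
using System.Linq;

class KeePassHotKey
{
    static void Main()
    {
        if (Process.GetProcessesByName("KeePass").Any())
        {
            // an instance is running: ask it to auto-type the selected entry
            Process.Start("KeePass.exe", "--auto-type-selected");
        }
        else
        {
            // no instance yet: open the user-configured database file
            Process.Start("KeePass.exe", "\"C:\\path\\to\\my.kdbx\"");
        }
    }
}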

You can find source code and released binaries on GitHub.

Today I present you the Checkouts Overview tool.

https://github.com/sgrottel/checkouts-overview
https://go.grottel.net/checkouts-overview

What? Why? Because this little tool helps me.

In my private setup, I have a lot of smaller repos checked out and work on them only occasionally. In addition, I got several repos to collect the history of some text documents. Some of those repos are synced against servers which are only occasionally online, partly for power saving, partly due to VPN and network connectivity. As a result, I keep losing track of the sync states of all the different repos.

Is everything checked in? — most times, yes. If the change was complete.

Is everything pushed? — maybe.

Am I on branches? — no idea.

You might not need this app if you have a better structured work process than I do. I don’t, so I need help from a tool, from this tool.

If you are interested, you can find more info in the app’s GitHub repository.

Note: as for the app’s icon, it’s about (repository) clones, right?

Redate is another tool in my growing toolbox. The idea is simple: many applications generate files, write files, update files, with exactly the same content as before. The file write date, of course, is updated. The content stays the same. Other tools, then again, use the file write date as an indicator of whether the files have changed. Which makes sense, right?

So, this little tool, “Redate,” stores the MD5 hashes of the files and their original write dates. When the tool is then re-run on that list of files, it restores the original write dates for all files with unchanged MD5s. And that’s it.
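The core of the idea fits in a few lines. A minimal sketch in C#, as an illustration of the approach, not Redate’s actual implementation:

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class RedateSketch
{
    static string HashFile(string path)
    {
        using MD5 md5 = MD5.Create();
        using FileStream stream = File.OpenRead(path);
        return Convert.ToHexString(md5.ComputeHash(stream));
    }

    // known: file path -> (MD5 hash, original last-write time)
    static void RestoreDates(Dictionary<string, (string Md5, DateTime WriteTime)> known)
    {
        foreach (var (path, entry) in known)
        {
            if (File.Exists(path) && HashFile(path) == entry.Md5)
            {
                // content unchanged, so reset the write date to the stored one
                File.SetLastWriteTimeUtc(path, entry.WriteTime);
            }
        }
    }
}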

I use it for Vue.js projects, to keep the write dates of files in the dist folders. Then, a simple FTP-sync only needs to update changed files for the final deployment. This helps for projects with many unchanged assets.

You can grab source and binary releases from GitHub.

This article details working with inter-plugin dependencies in MegaMol™. The primary scenario is to have one (or many) plugins use Call classes exported by another plugin (and not by the MegaMol™ core).

Actually, nothing was changed in the MegaMol™ core. Inter-plugin dependencies already worked, to some extent. The limits, and especially the behavior in error cases, are described below.

Test Project: interplugin_test

The following demo project contains two plugins:

https://bitbucket.org/MegaMolDev/megamol_interplugin_test

Plugin A exports 2 modules and 1 call.

Plugin B exports one module using the call exported by plugin A.

Exporting Call Header

Usually, modules and calls are only exported via their meta data description classes as part of the implementation in “plugin_instance”. This meta data is sufficient for the MegaMol™ core to instantiate the module at runtime. However, these classes are normally not exported by the plugins.

If you want to use a class, e.g. a Call, in other plugins at compile time, you have to export the type. You need to place the corresponding header file in the public export folder. For a typical case, have a look at the demo project: “./a/include/interplugin_test_a/IplgDemoCall.h”. I also recommend adding the header files to the “Public Header Files” filter in the Visual Studio Solution Explorer.

Additionally, you need to export the class using the corresponding API macro, e.g. “INTERPLUGIN_TEST_A_API”, which is defined in the plugin’s primary header file, e.g. “interplugin_test_a/interplugin_test_a.h”.

Of course, you still need to export the meta data description for the call itself in the “plugin_instance” definition (cf. “interplugin_test_a.cpp” line 51).
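Put together, an exported call header looks roughly like this. This is a sketch modeled on “IplgDemoCall.h” from the demo project; the exact base class and includes may differ in your MegaMol™ version:

#ifndef INTERPLUGIN_TEST_A_IPLGDEMOCALL_H_INCLUDED
#define INTERPLUGIN_TEST_A_IPLGDEMOCALL_H_INCLUDED

#include "interplugin_test_a/interplugin_test_a.h" // defines INTERPLUGIN_TEST_A_API
#include "mmcore/Call.h"

namespace megamol {
namespace interplugin_test_a {

    // the API macro exports the type, so plugins linking against plugin A can use it
    class INTERPLUGIN_TEST_A_API IplgDemoCall : public core::Call {
    public:
        static const char *ClassName(void) {
            return "IplgDemoCall";
        }
        // ... getters/setters for the data exchanged between the connected modules ...
    };

} /* end namespace interplugin_test_a */
} /* end namespace megamol */

#endif /* INTERPLUGIN_TEST_A_IPLGDEMOCALL_H_INCLUDED */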

Using Call in another Plugin

Now, to use the Call in another plugin, that plugin basically needs to link against the exporting plugin. Following the demo project, the exporting plugin is called A and the using plugin is named B.

First, A is a library and a normal development dependency for B. For Visual Studio, I recommend using the “configure.win.pl” and “ExtLibs.props.input” similar to any other third-party library. In the demo project, the access to the subdirectory of A is hard coded in the “ExtLibs.props.input”. Then, you can use this user macro in the project settings:

  • C/C++ > General > Additional Include Directories
    • E.g. $(PluginAPath)include\
  • Linker > General > Additional Library Directories
    • E.g. $(PluginAPath)lib\$(PlatformName)\$(Configuration)\
  • Linker > Input > Additional Dependencies
    • E.g. interplugin_test_a.lib

As you see, the linker only uses the import library of the plugin, not the compiled plugin A itself.

Now you can include the public header file of the call exported by the plugin, cf. “./b/src/IplgValueInvertB.cpp” line 3. Plugin B uses this class like any class exported from the core. Nothing special is required.

For Linux you could introduce similar settings in the “CMakeLists.txt” file. I did not.

Since I install everything of MegaMol™ into a local user directory (using the install prefix settings in the build scripts), all parts of MegaMol™ are available at the same location. This includes the core, which the “CMakeLists.txt” file already searches for, and plugin A, which is thus accidentally found too. So the public include files, which were copied when plugin A was “installed”, are already available. Linux does not require shared objects to link against their dependencies; the runtime loader is expected to resolve all symbols. Thus, no additional settings are required for building on Linux.

If you want to be able to work with plugins not locally installed, you should write a corresponding find script for the base plugin A and use “find_package” in the “CMakeLists.txt” of plugin B. For now, I don’t care.
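If you ever want to, here is a hypothetical sketch of what plugin B’s “CMakeLists.txt” could then use. The package name and result variables are assumptions, since no such find script ships with the demo:

# hypothetical find script usage in plugin B's CMakeLists.txt
find_package(interplugin_test_a REQUIRED HINTS ${PLUGIN_A_DIR})
include_directories(${interplugin_test_a_INCLUDE_DIRS})
target_link_libraries(interplugin_test_b ${interplugin_test_a_LIBRARIES})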

Configuring MegaMol™ with both Plugins

Just add both plugins, as usual, to the MegaMol™ configuration file, either explicitly (recommended) or using file globbing. You should keep the dependencies in mind and load plugin A before plugin B. But technically, it does not matter.
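For the demo project, the explicit variant could look like the following configuration lines. The paths and plugin file names are assumptions following the usual naming scheme; adjust them to your build:

<plugin path="C:\megamol\interplugin_test\a\bin" name="interplugin_test_a.mmplg" action="include" />
<plugin path="C:\megamol\interplugin_test\b\bin" name="interplugin_test_b.mmplg" action="include" />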

Both plugins, like all plugins, are basically DLLs. So if you load plugin B first, the DLL is loaded with all its dependencies. These include the DLL of plugin A. So plugin A is then in memory, but its meta data is not known to the core yet. If then the command for loading plugin A is executed, the OS runtime is asked to load the DLL of plugin A. Since that DLL is already in memory, the load succeeds and the plugin’s meta data is received and added to the core factory constructs.

Beware of Cyclic Dependencies

Cyclic dependencies should be avoided, as always. In theory, they could work, e.g. plugin A using a class from plugin B and the other way round at the same time. However, if you are heading towards such a scenario, I strongly recommend creating a third plugin holding the shared classes, e.g. the Calls.

Runtime Behavior if Dependent Plugin is not found

The OS runtime errors when a DLL (plugin) cannot be loaded are rather useless. (I am working on something better here, but it is not as easy as one would think.) Usually, it only tells you that the DLL could not be loaded. So keep in mind, when developing your plugins, that other plugins you depend on not only need to be available at compile time, but also need to be installed in the same bin directory for runtime access.

Version Number Tests

All plugins check their version numbers of core and Vislib against the loading core. This is to avoid inconsistencies at runtime. Imagine you build the core, then work on your plugin. During this work you stumble upon a bug in the Vislib and fix it. Then your core and your plugin use different Vislibs, and only this very version check tells you that this is a bad idea.

Now, take that scenario and replace the Vislib with plugin A. Welcome to hell. There currently is no version check implemented for cross-plugin dependencies. (Most likely, there will be no such check before the infamous upcoming “Call Interface Redesign”.) You need to be careful!

Simple computer graphics demos are often developed as console applications. Having the console window is simply convenient for debug output. However, when we then show these demos on our stereo powerwall, the console window flashing up at program start is massively disturbing. That is why I took some time and wrote a little tool. It starts the console application, hides the console window, but captures the output. This way, we can still check what happened if something does not work.
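The underlying trick is plain process creation with output redirection. A minimal sketch of that idea in C# — the real tool may be implemented differently:

using System.Diagnostics;
using System.IO;

class HiddenLauncher
{
    static void Main(string[] args)
    {
        ProcessStartInfo psi = new(args[0])
        {
            UseShellExecute = false,
            CreateNoWindow = true,         // no console window flashing up
            RedirectStandardOutput = true  // capture the output instead
        };
        using Process proc = Process.Start(psi)!;
        string log = proc.StandardOutput.ReadToEnd();
        proc.WaitForExit();
        File.WriteAllText("output.log", log); // keep the output for inspection
    }
}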

I present the HiddenConsole:

HiddenConsole.zip – Application starter hiding the console window
[55.3 KB; MD5: 848cbd8aa901fe38be8179d65b6d2162; More Info]

And, because I can, the source is freely available:

https://bitbucket.org/sgrottel-uni/hiddenconsole

We updated MegaMol to use cmake to build on Linux. This greatly improved the build process on Linux. But it also makes some more uncommon scenarios difficult to realize. For example, cmake usually detects required dependencies automatically. But in some scenarios you need to override this magic.

In this article I show how to compile a second MegaMol on a system on which a MegaMol already has been compiled and installed. This is useful when working with experimental versions.

VISlib and visglut

First off, you build the visglut the usual way. I assume here that the installed MegaMol uses a different visglut than the one you want to build now:

mkdir megamol_x2
cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/utilities/visglut/tags/forMegaMol11 visglut
cd visglut/build_linux_make
make

If everything worked you can find the following files:

in megamol_x2/visglut/include:

GL/freeglut_ext.h
GL/freeglut.h
GL/freeglut_std.h
GL/glut.h
visglut.h
visglutversion.h

and in megamol_x2/visglut/lib:

libvisglut64.a
libvisglut64d.a

If so, let’s continue with the VISlib:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/utilities/vislib/tags/release_2_0 vislib
cd vislib

Now comes the first action which differs from the default build process. As usual, we will use the script cmake_build.sh. Per default, this script registers the build directories in the cmake package registry, which enables cmake to find the package via its build trees. In this scenario, however, we do not want this special build to be automatically found, because we do not want to get in the way of our system-installed MegaMol. We thus deactivate the package registry.

This command configures and builds the VISlib, both for debug and release version:

./cmake_build.sh -dcmn

As always, if you encounter build problems due to the multi-job make, reduce the number of compile jobs:

./cmake_build.sh -dcmnj 1

Note that I do not specify an install directory. I do not plan to install this special MegaMol. I just want to build it, for example for a bug hunt.

MegaMolCore

It’s now time for the core.

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/core/branches/v1.1rc core
cd core

We first test the configuration by only configuring release and not building anything:

./cmake_build.sh -cv ../vislib -C -DCMAKE_DISABLE_FIND_PACKAGE_MPI=TRUE

Note that I also disabled MPI support here. The system I am building on has MPI installed, but I don’t want this MegaMol to use it.

The output should contain this line:

-- Found vislib: /home/sgrottel/megamol20150726/vislib/build.release/libvislib.a

This points to the right vislib, the one we specified. So all is well. We can build MegaMol, again without registering its build trees in the cmake package registry:

./cmake_build.sh -dcmnv ../vislib -C -DCMAKE_DISABLE_FIND_PACKAGE_MPI=TRUE

When all worked, you got yourself the binaries:

megamol_x2/core/build.debug/libMegaMolCored.so
megamol_x2/core/build.release/libMegaMolCore.so

MegaMolConsole

Get yourself a working copy of the console:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/frontends/console/branches/v1.1rc console
cd console

Again, we test if everything works by only configuring release and not building:

./cmake_build.sh -c -f ../core

The Console does not register its build tree per default, since no other project depends on the console. So we are fine here.

The output should contain these lines:

-- Looking for MegaMolCore with hints: ../core;../core/build.release;../core/share/cmake/MegaMolCore
-- Found MegaMolCore: /home/sgrottel/megamol20150726/core/build.release/libMegaMolCore.so
-- MegaMolCore suggests vislib at: /home/sgrottel/megamol20150726/vislib/build.release
-- MegaMolCore suggests install prefix: /usr/local
-- Using MegaMolCore install prefix
-- Found vislib: /home/sgrottel/megamol20150726/vislib/build.release/libvislib.a
-- Found AntTweakBar: /home/sgrottel/AntTweakBar/lib/libAntTweakBar.so
-- Found visglut: /home/sgrottel/megamol20150726/visglut/lib/libvisglut64.a

If the directories for other libraries are wrong, for example for the AntTweakBar or the visglut, use the cmake-typical DIR variables to give a search hint. Remember, relative paths might be confusing. Better use absolute paths. I don’t:

./cmake_build.sh -c -f ../core -C -Dvisglut_DIR=~/megamol20150709/visglut -C -DAntTweakBar_DIR=../../AntTweakBar

But in my case the cmake-magic worked fine in the first place. So, I configure both build types again:

./cmake_build.sh -dc -f ../core

Double check the output. Make sure the core, the vislib and the visglut are found in all the right places. If they are, build it:

./cmake_build.sh -dm

At this point you can quickly test your MegaMol. First open the megamol.cfg configuration file in a text editor and adjust the paths in there to yours. Then run MegaMol:

cd build.release
./MegaMolCon

If this seems ok, and if you have a local graphics card you can run the demo renderer:

./MegaMolCon -i demospheres s

Some MegaMol Plugin

Finally we need a plugin. I go for the mmstd_moldyn:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/plugins/mmstd_moldyn/branches/v1.1rc mmstd_moldyn
cd mmstd_moldyn

The process is now exactly the same as with the console:

./cmake_build.sh -dcf ../core

Then double check the directories for the core and the VISlib. If they are good, build the plugin:

./cmake_build.sh -dm

To test this plugin we go back to the console, and adjust the config file to load the plugin:

cd megamol_x2/console/build.release

Include the following lines in the config file. Obviously adjust the paths to what you need:

<plugin path="/home/dude/megamol_x2/mmstd_moldyn/build.release" name="mmstd_moldyn.mmplg" action="include" />
<shaderdir path="/home/dude/megamol_x2/mmstd_moldyn/Shaders" />

If you now run MegaMol, it will try to load your plugin and will report it. The console output should contain something like:

200|Plugin mmstd_moldyn loaded: 11 Modules, 0 Calls
200|Plugin "mmstd_moldyn" (/home/sgrottel/megamol20150726/mmstd_moldyn/build.release/mmstd_moldyn.mmplg) loaded: 11 Modules, 0 Calls registered

And that’s it.