This article details how to work with inter-plugin dependencies in MegaMol™. The primary scenario is to have one (or many) plugins use Call classes exported by another plugin (rather than by the MegaMol™ core).

Actually, nothing was changed in the MegaMol™ core. Inter-plugin dependencies already worked, to some extent. The limits, and especially the behavior in error cases, are described below.

Test Project: interplugin_test

The following project shows two plugins:

https://bitbucket.org/MegaMolDev/megamol_interplugin_test

Plugin A exports 2 modules and 1 call.

Plugin B exports one module using the call exported by plugin A.

Exporting Call Header

Usually, modules and calls are only exported via their meta data description classes as part of the implementation in “plugin_instance”. This meta data is sufficient for the MegaMol™ core to instantiate the module at runtime. However, these classes are normally not exported by the plugins.

If you want to use a class, e.g. a Call, in other plugins at compile time, you have to export the type. Place the corresponding header file in the public export folder; for a typical example, have a look at the demo project: “./a/include/interplugin_test_a/IplgDemoCall.h”. I also recommend adding the header files to the filter “Public Header Files” in the Visual Studio Solution Explorer.

Additionally, you need to export the class using the corresponding API macro: e.g. “INTERPLUGIN_TEST_A_API”, which is defined in the plugin’s primary header file, e.g. “interplugin_test_a/interplugin_test_a.h”.
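For illustration, such an exported call header might look roughly like the following sketch. The class name and the API macro follow the demo project; the base-class include path, the function names and the payload accessors are assumptions made purely for this sketch:

// interplugin_test_a/IplgDemoCall.h -- sketch only, not the demo project's actual code
#pragma once
#include "interplugin_test_a/interplugin_test_a.h" // defines INTERPLUGIN_TEST_A_API
#include "mmcore/Call.h" // core Call base class; exact include path is an assumption

namespace megamol {
namespace interplugin_test_a {

class INTERPLUGIN_TEST_A_API IplgDemoCall : public core::Call {
public:
    // meta data used by the call description registered in "plugin_instance"
    static const char *ClassName(void) { return "IplgDemoCall"; }
    static const char *Description(void) { return "Demo call exported by plugin A"; }
    static unsigned int FunctionCount(void) { return 1; }
    static const char *FunctionName(unsigned int idx) { return (idx == 0) ? "GetData" : nullptr; }

    IplgDemoCall(void);
    virtual ~IplgDemoCall(void);

    // hypothetical payload accessors, for illustration only
    float Value(void) const { return this->value; }
    void SetValue(float v) { this->value = v; }

private:
    float value;
};

} /* end namespace interplugin_test_a */
} /* end namespace megamol */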

Of course, you still need to export the meta data description for the call itself in the “plugin_instance” definition (cf. “interplugin_test_a.cpp” line 51).

Using Call in another Plugin

Now, to use the Call in another plugin, that plugin basically needs to link against the exporting plugin. Following the demo project, the exporting plugin is called A and the consuming plugin is named B.

First, A is a library and a normal development dependency for B. For Visual Studio, I recommend using “configure.win.pl” and “ExtLibs.props.input”, similar to any other 3rd-party library. In the demo project, the path to the subdirectory of A is hard coded in “ExtLibs.props.input”. Then, you can use this user macro in the project settings:

  • C/C++ > General > Additional Include Directories
    • E.g. $(PluginAPath)include\
  • Linker > General > Additional Library Directories
    • E.g. $(PluginAPath)lib\$(PlatformName)\$(Configuration)\
  • Linker > Input > Additional Dependencies
    • E.g. interplugin_test_a.lib

As you can see, the linker only uses the import library built for the plugin, not the compiled plugin DLL A itself.

Now you can include the public header file of the call exported by plugin A, cf. “./b/src/IplgValueInvertB.cpp” line 3. Plugin B uses this class like any class exported from the core. Nothing special is required.
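A minimal sketch of how plugin B might use the call inside a module callback; the module namespace, the callback name and the payload accessors are illustrative (matching the header sketch above), not the actual demo code:

// b/src/IplgValueInvertB.cpp -- sketch only
#include "interplugin_test_a/IplgDemoCall.h" // public header exported by plugin A

using namespace megamol;

bool interplugin_test_b::IplgValueInvertB::getDataCallback(core::Call& caller) {
    // the call type from plugin A is used exactly like a call exported by the core
    interplugin_test_a::IplgDemoCall *c =
        dynamic_cast<interplugin_test_a::IplgDemoCall*>(&caller);
    if (c == nullptr) return false;
    c->SetValue(-c->Value()); // hypothetical accessors, see the header sketch above
    return true;
}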

For Linux you could introduce similar settings in the “CMakeLists.txt” file. I did not.

Since I install everything of MegaMol™ into a local user directory (using the install prefix settings in the build scripts), all parts of MegaMol™ are available at the same location. This includes the core, which the “CMakeLists.txt” file already searches for, and plugin A, which happens to be found there as well. The public include files, which were copied when plugin A was “installed”, are therefore already available. Furthermore, Linux does not require shared objects to link against their dependencies at build time; the runtime loader is expected to resolve everything. Thus, no additional settings are required for building on Linux.

If you want to be able to work with plugins not locally installed, you should write a corresponding find script for the base plugin A and use “find_package” in the “CMakeLists.txt” of plugin B. For now, I don’t care.

Configuring MegaMol™ with both Plugins

Just add both plugins, as usual, to the MegaMol™ configuration file, either explicitly (recommended) or using file globbing. You should keep the dependencies in mind and load plugin A before plugin B. Technically, however, the order does not matter.
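For example, the relevant part of the configuration file could look like this (a sketch only; paths and plugin file names are placeholders):

<plugin path="/path/to/megamol/plugins" name="interplugin_test_a.mmplg" action="include" />
<plugin path="/path/to/megamol/plugins" name="interplugin_test_b.mmplg" action="include" />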

Both plugins, in fact all plugins, are basically DLLs. So if you load plugin B first, the DLL is loaded with all its dependencies, which include the DLL of plugin A. Plugin A is then in memory, but its meta data is not known to the core yet. When the command for loading plugin A is executed afterwards, the OS runtime is asked to load the DLL of plugin A. Since that DLL is already in memory, the load succeeds, and the plugin’s meta data is received and added to the core factory constructs.

Beware of Cyclic Dependencies

Cyclic dependencies should be avoided, as always. In theory, they could work, e.g. plugin A uses a class from plugin B and the other way round at the same time. However, if you are heading towards such a scenario, I strongly recommend creating a third plugin holding the shared classes, e.g. the Calls.

Runtime Behavior if Dependent Plugin is not found

The OS runtime errors reported when a DLL (plugin) cannot be loaded are rather useless. (I am working on something better here, but it is not as easy as one would think.) Usually, they only tell you that the DLL could not be loaded. So keep in mind when developing your plugins: other plugins you depend on not only need to be available at compile time, but also need to be installed in the same bin directory for runtime access.

Version Number Tests

All plugins check their version numbers of core and Vislib against the loading core. This is to avoid inconsistencies at runtime. Imagine you build the core, then work on your plugin. During this work you stumble upon a bug in the Vislib and fix it. Now your core and your plugin use different Vislibs, and only this very version check tells you that this is a bad idea.

Now, take that scenario and replace the Vislib with plugin A. Welcome to hell. There currently is no version check implemented for cross-plugin dependencies. (Most likely, there will be no such check before the infamous upcoming “Call Interface Redesign”.) You need to be careful!


Simple computer graphics demos are often developed as console applications. Having the console window is simply convenient for debug output. However, if we then show these demos on our stereo powerwall, the console window flashing on program start is massively disturbing. That is why I took some time and wrote a little tool. It starts the console application, hides the console window, but captures the output. This way, we can still check what happened if something does not work.

I present the HiddenConsole:

HiddenConsole.zip: Application starter hiding the console window
[55.3 KB; MD5: 848cbd8aa901fe38be8179d65b6d2162]

And, because I can, the source is freely available:

https://bitbucket.org/sgrottel-uni/hiddenconsole
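For illustration, the underlying mechanism boils down to something like the following Win32 sketch (this is not the actual HiddenConsole source; the command line is a placeholder):

// Sketch: start a console program without a visible console window and capture its output.
#include <windows.h>
#include <cstdio>

int main() {
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE }; // pipe handles must be inheritable
    HANDLE readPipe = NULL, writePipe = NULL;
    if (!CreatePipe(&readPipe, &writePipe, &sa, 0)) return 1;
    SetHandleInformation(readPipe, HANDLE_FLAG_INHERIT, 0); // our end is not inherited

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
    si.wShowWindow = SW_HIDE;  // hide the window, should one appear anyway
    si.hStdOutput = writePipe; // redirect the child's output into our pipe
    si.hStdError = writePipe;

    PROCESS_INFORMATION pi = { 0 };
    char cmdline[] = "demo.exe"; // placeholder command line
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, TRUE,
            CREATE_NO_WINDOW, // do not create a console window for the child
            NULL, NULL, &si, &pi)) return 1;
    CloseHandle(writePipe); // keep only the child's copy of the write end open

    // capture the output so it can still be inspected later
    char buf[4096]; DWORD got = 0;
    while (ReadFile(readPipe, buf, sizeof(buf), &got, NULL) && got > 0) {
        fwrite(buf, 1, got, stdout); // or append to a log file instead
    }
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess); CloseHandle(pi.hThread); CloseHandle(readPipe);
    return 0;
}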


We updated MegaMol to use cmake to build on Linux. This greatly improved the build process on Linux, but it also makes some less common scenarios harder to realize. For example, cmake usually detects required dependencies automatically; in some scenarios, however, you need to override this magic.

In this article I show how to compile a second MegaMol on a system on which a MegaMol already has been compiled and installed. This is useful when working with experimental versions.

VISlib and visglut

First off, you build the visglut the usual way. I assume here that the installed MegaMol uses a different visglut than the one you want to build now:

mkdir megamol_x2
cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/utilities/visglut/tags/forMegaMol11 visglut
cd visglut/build_linux_make
make

If everything worked you can find the following files:

in megamol_x2/visglut/include:

GL/freeglut_ext.h
GL/freeglut.h
GL/freeglut_std.h
GL/glut.h
visglut.h
visglutversion.h

and in megamol_x2/visglut/lib:

libvisglut64.a
libvisglut64d.a

If so, let’s continue with the VISlib:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/utilities/vislib/tags/release_2_0 vislib
cd vislib

Now comes the first step that differs from the default build process. As usual, we will use the script cmake_build.sh. By default, however, this script registers the build directories in the cmake package registry, which enables cmake to find this package via its build trees. In this scenario we do not want this special build to be found automatically, because we do not want to get in the way of our system-installed MegaMol. We thus deactivate the package registry.

This command configures and builds the VISlib, both for debug and release version:

./cmake_build.sh -dcmn

As always, if you encounter build problems due to the multi-job make, reduce the number of compile jobs:

./cmake_build.sh -dcmnj 1

Note that I do not specify an install directory. I do not plan to install this special MegaMol. I just want to build, for example for a bug hunt.

MegaMolCore

It’s now time for the core.

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/core/branches/v1.1rc core
cd core

We first test the configuration by only configuring release and not building anything:

./cmake_build.sh -cv ../vislib -C -DCMAKE_DISABLE_FIND_PACKAGE_MPI=TRUE

Note that I also disabled MPI-Support here. The system I am building on has MPI installed, but I don’t want this MegaMol to use it.

The output should contain this line:

-- Found vislib: /home/sgrottel/megamol20150726/vislib/build.release/libvislib.a

This points to the right vislib, the one we specified. So all is well. We can build MegaMol, again without registering its build trees in the cmake package registry:

./cmake_build.sh -dcmnv ../vislib -C -DCMAKE_DISABLE_FIND_PACKAGE_MPI=TRUE

If everything worked, you now have the binaries:

megamol_x2/core/build.debug/libMegaMolCored.so
megamol_x2/core/build.release/libMegaMolCore.so

MegaMolConsole

Get yourself a working copy of the console:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/frontends/console/branches/v1.1rc console
cd console

Again, we test if everything works by only configuring release and not building:

./cmake_build.sh -c -f ../core

The console does not register its build tree by default, since no other project depends on the console. So we are fine here.

The output should contain these lines:

-- Looking for MegaMolCore with hints: ../core;../core/build.release;../core/share/cmake/MegaMolCore
-- Found MegaMolCore: /home/sgrottel/megamol20150726/core/build.release/libMegaMolCore.so
-- MegaMolCore suggests vislib at: /home/sgrottel/megamol20150726/vislib/build.release
-- MegaMolCore suggests install prefix: /usr/local
-- Using MegaMolCore install prefix
-- Found vislib: /home/sgrottel/megamol20150726/vislib/build.release/libvislib.a
-- Found AntTweakBar: /home/sgrottel/AntTweakBar/lib/libAntTweakBar.so
-- Found visglut: /home/sgrottel/megamol20150726/visglut/lib/libvisglut64.a

If the directories for other libraries are wrong, for example for the AntTweakBar or the visglut, use the cmake-typical *_DIR variables to give a search hint. Remember, relative paths might be confusing; better use absolute paths. I don’t:

./cmake_build.sh -c -f ../core -C -Dvisglut_DIR=~/megamol20150709/visglut -C -DAntTweakBar_DIR=../../AntTweakBar

But in my case the cmake-magic worked fine in the first place. So, I configure both build types again:

./cmake_build.sh -dc -f ../core

Double check the output. Make sure the core, the vislib and the visglut are found in all the right places. If they are, build it:

./cmake_build.sh -dm

At this point you can quickly test your MegaMol. First open the megamol.cfg configuration file in a text editor and adjust the paths in there to yours. Then run MegaMol:

cd build.release
./MegaMolCon

If this seems ok, and if you have a local graphics card you can run the demo renderer:

./MegaMolCon -i demospheres s

Some MegaMol Plugin

Finally we need a plugin. I go for the mmstd_moldyn:

cd megamol_x2
svn co https://svn.vis.uni-stuttgart.de/projects/megamol/plugins/mmstd_moldyn/branches/v1.1rc mmstd_moldyn
cd mmstd_moldyn

The process is now exactly the same as with the console:

./cmake_build.sh -dcf ../core

Then double-check the directories for the core and the VISlib. If they are good, build the plugin:

./cmake_build.sh -dm

To test this plugin we go back to the console, and adjust the config file to load the plugin:

cd megamol_x2/console/build.release

Include the following lines in the config file. Obviously adjust the paths to what you need:

<plugin path="/home/dude/megamol_x2/mmstd_moldyn/build.release" name="mmstd_moldyn.mmplg" action="include" />
<shaderdir path="/home/dude/megamol_x2/mmstd_moldyn/Shaders" />

If you now run MegaMol, it will try to load your plugin and report it. The console output should contain something like:

200|Plugin mmstd_moldyn loaded: 11 Modules, 0 Calls
200|Plugin "mmstd_moldyn" (/home/sgrottel/megamol20150726/mmstd_moldyn/build.release/mmstd_moldyn.mmplg) loaded: 11 Modules, 0 Calls registered

And that’s it.


NuGet is a handy package system for Visual Studio. Originally meant for Dotnet libraries, it was extended some time ago to support native C++ projects as well. The only catch is that the packages need to be made specifically for NuGet. There are some, but by far not all that one (I) needs for daily work. Most problematic is that many packages do not support Visual Studio 2013, and that is still the case even today, with Visual Studio 2015 almost here.

So there is only one thing to do: join the fray! Well, I could stay out of it and continue to grumble about it, but let’s just keep that as plan B. So, here it is: my first NuGet package:

the AntTweakBar v1.16

(with kind support of the author)

And this will not be the last you’ve seen of me.


Today I am presenting another small tool of mine: the ShutdownPlannerGUI

ShutdownPlannerGUI.zip: Simple GUI for planned shutdowns of MS Windows
[188 KB; MD5: 45cb64eef13ea47e98a7dcde0773e6f1]

The basic idea is simple: it is a small GUI, slapped together in C#, around the shutdown command-line utility. It is all about the timer specifying when the system is going to shut down. The GUI provides several text boxes to conveniently enter the time in hours, minutes and seconds. And that is it.
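Under the hood this boils down to command lines like the following sketch; the exact arguments the GUI composes may differ:

rem shut down the machine in 3600 seconds (one hour)
shutdown /s /t 3600
rem abort a pending shutdown again
shutdown /a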


Once again, it is time for one of my little tools which the world does not need (but I do). The idea is simple: think of a series of files in one directory, e.g. the audio files of an audio book or the video files of a TV series. Every once in a while you watch or listen to one of the files, and days later you do not remember which file was the last one you have seen. My tool registers itself in the context menu of the Windows Explorer and provides a simple way of setting a bookmark on a file in the directory. The bookmark is an empty file with the same name, plus an extra file name extension. The whole thing is not a shell extension, but a simple, normal DotNet application which writes to the right places in the registry. Simple, not elegant, but working.
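The registry entries in question are of the classic “context menu verb” kind, roughly like this sketch (key name and installation path are illustrative, not the exact entries the tool writes):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\Set Bookmark]

[HKEY_CLASSES_ROOT\*\shell\Set Bookmark\command]
@="\"C:\\Tools\\FileBookmark.exe\" \"%1\""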

FileBookmark.zip: File Bookmark Utility
[91.2 KB; MD5: bd58a615775c9897ae82536bb678b05b]

And, just because I can, here is the source code:

FileBookmark_src.zip: File Bookmark Utility Source Code
[60.2 KB; MD5: 84071f778ccb81b0c39101577a3fa204]


Maybe the most important coding project of mine, with impact on my private programming as well as on my work, is TheLib. The basic idea is to collect all classes which we (two friends and myself) wrote and used over and over again in several different projects. These classes usually are compatibility or convenience wrappers around API calls or library calls (e.g. STL, Boost, whatever). That is where the name of our lib comes from: Totally Helpful Extensions. And, it is just cool to write: #include "the/exception.h".

However, I hear very often: “Why do you write a lib? There are plenty already for all tasks.”

If that were true, none of us would write programs anymore; we would only “compose” programs from libs. Well, we don’t. Or rather, I don’t. Meaning: TheLib really is helpful. It is not a replacement for the other libs; it is a complement, an extension.

An Example: Strings!

The string functionality in TheLib is not nearly as powerful as one would require to write a fully fledged text processor. That is not the goal of TheLib. We wrote these functions to provide somewhat more than basic functionality. The idea is to enable simple or prototypical applications to easily implement nice-to-use interfaces for the user.

Especially under Linux (but also under Windows) there are usually a total of three different types of strings:

  1. char * or std::string which store ASCII or ANSI strings with locale dependent character sets
  2. char * or std::string which store multi-byte strings, e.g. using UTF-8 encoding
  3. wchar_t * or std::wstring which store Unicode strings.

Depending on these types different API functions need to be called, e.g. for determining the length of the string:

  1. strlen
  2. multiple calls of mbrlen
  3. wcslen

One issue that arises between cases 1 and 2 is that modern Linux often uses a locale which stores UTF-8 strings within the standard strings. As long as strings are only written, stored, and displayed, this is a great way to maintain compatibility and still gain the modern feature of special characters. However, as soon as you perform a more complex operation (like creating a substring), this approach results in unexpected behaviour, as the bytes of a single multi-byte character are treated like independent characters.

Example:

  • Your user is a geek and enters “あlptraum” as the input string.
  • This string is stored in a std::string using a UTF-8 locale.
  • Your application now wants to extract the first character for some reason (e.g. to produce typographic capitalization using a specialized font).
  • The normal way of doing this is accessing char first_char = s[0]; and std::string remaining = s.substr(1);
  • Because the Japanese “あ” occupies three bytes in UTF-8, first_char is just the first of those bytes, and remaining still starts with the two trailing bytes of “あ” – garbage in both cases.

This not only applies to Japanese characters, but obviously to almost all characters with diacritics. What is even more important: this issue also results in unexpected behaviour when using (or implementing) string operations which ignore case, e.g. comparisons.

Example of changing a string to lower case:

// we will do this the STL-way:
// http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/
#include <algorithm>
#include <cctype>
#include <string>

void lower_case_demo() {
    std::string data;
    // 'data' is set to "あlptraum", encoded as UTF-8

    std::transform(data.begin(), data.end(), data.begin(), ::tolower);
    // oops: content of 'data' is now something like "0blptraum",
    // because ::tolower was applied to each individual byte
    // ...
}

To avoid this problem, TheLib internally initializes the system locale for the application and detects whether the locale uses UTF-8 encoding. If it does, all TheLib string functions call the multi-byte API functions and work as expected. In addition, TheLib provides some functions to explicitly convert from or to UTF-8 strings (e.g. for file I/O).
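Roughly, and without claiming that this is TheLib’s actual implementation, the idea of a locale-aware string length looks like this:

// Sketch: count characters instead of bytes when the active locale uses UTF-8
#include <clocale>
#include <cstring>
#include <cwchar>
#include <string>

std::size_t char_length(const std::string& s) {
    // detect a UTF-8 locale once; TheLib does something similar at initialization
    static const bool utf8 = [] {
        const char* loc = std::setlocale(LC_CTYPE, ""); // activate the system locale
        return (loc != nullptr) && (std::strstr(loc, "UTF-8") != nullptr);
    }();
    if (!utf8) return s.length(); // plain single-byte case, like strlen

    std::mbstate_t state = std::mbstate_t();
    std::size_t count = 0, pos = 0;
    while (pos < s.size()) {
        std::size_t len = std::mbrlen(s.data() + pos, s.size() - pos, &state);
        if (len == 0 || len == (std::size_t)-1 || len == (std::size_t)-2) break; // NUL, invalid or incomplete
        pos += len;
        ++count;
    }
    return count;
}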

Of course, you don’t need TheLib to do this. You can probably use another lib (I only know the IBM Unicode lib, ICU, which seems like a huge hulk), you can use your own workarounds, or you can ignore such problems because “they will not occur in your application scenarios”. However, having TheLib do the job is just handy. Nothing more.


Software should solve problems. Sometimes this is the case.

I had a problem:

I have a somewhat older convertible laptop, an Acer Aspire 1820PT: a nice and cheap convertible of its time, with touch screen support for up to two fingers and acceptable computational power. I have upgraded it in the meantime with an SSD and am now running Windows 8. So far so good. The problem, however, is that the tilt sensor is no longer supported by Windows 8. 🙁

So I needed a solution. Hacking drivers or even writing drivers myself is not up my alley. I am an application developer. But, if something does not work automatically (anymore), we just need to make the manual use as comfortable as possible. That’s why I wrote a tiny tool: the DisplayRotator.

The idea is simple: the tool is pinned to the taskbar. As soon as it is started, it shows a simple window with four buttons for the four possible display rotation settings. Press one of these buttons and the display settings are changed accordingly. With this, I can set up the desktop orientation of my convertible with two clicks, or even two taps of my finger, and rotate the desktop any way I like.

DisplayRotator.zip: Display Rotation Tool
[152 KB; MD5: 07c3efddd05a98bf4d02db595b87f2fe]

And, because I can, the zip also contains the source code of the tool. It is written in C# and naturally uses the Windows API to change the display settings. Nice and easy. With the same code basis all display settings can be changed, like screen resolution and refresh rate. Even detaching or attaching monitors to the desktop is possible. Ok, the code for these functions is not in the tool, but the API calls are the same.
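For reference, the underlying Windows API call sequence for rotating the primary display looks roughly like this C++ sketch (the tool itself is written in C#; this only illustrates the API usage):

#include <windows.h>

// Sketch: rotate the primary display to 90 degrees via ChangeDisplaySettings
bool RotatePrimaryDisplay90() {
    DEVMODE dm = { 0 };
    dm.dmSize = sizeof(dm);
    if (!EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm)) return false;

    // width and height must be swapped when switching between landscape and portrait
    if ((dm.dmDisplayOrientation == DMDO_DEFAULT) || (dm.dmDisplayOrientation == DMDO_180)) {
        DWORD tmp = dm.dmPelsWidth;
        dm.dmPelsWidth = dm.dmPelsHeight;
        dm.dmPelsHeight = tmp;
    }
    dm.dmDisplayOrientation = DMDO_90;
    dm.dmFields |= DM_DISPLAYORIENTATION | DM_PELSWIDTH | DM_PELSHEIGHT;

    return ChangeDisplaySettings(&dm, 0) == DISP_CHANGE_SUCCESSFUL;
}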

Maybe the tool can be of use to someone else too.


The Windows PowerShell is quite nice. Of course, now all the Linux users will start bitching that they have always had something like this and that it is nothing special. And no one claims otherwise. But still, the PowerShell is nice and I enjoy it. 🙂

Like today: I needed a simple hex dump of a file:

PS >  $str = ""; $cnt = 0; get-content -encoding byte C:\path\to\file.txt | foreach-object { $str += (" {0:x2}" -f $_); if ($cnt++ -eq 7) { $cnt = 0; $str += "`n"; } }; write-host $str

Its elegance is limited, that I will admit; however, it does its job. And, somehow, it is quite nice …
