Last week I updated my computer to Windows 8.1. The improved SkyDrive integration is still completely useless, but luckily there are third-party tools for the job.

Whatever. Then I was copying some files onto my backup server and the copy dialog offended me with 2 KB/s. To make things clear: we are talking about 5 files, more than 500 MB each, altogether a bit over 18 GB. Thus, this is not the “small files are slow” issue. After about half a day of checking the configurations of my server and my other computers, and some semi-guided googling, I finally found the answer:

Large Send Offload v2 (IPv4) – Disable
Large Send Offload v2 (IPv6) – Disable

What really frustrates me about this whole story is that I knew about this. I already knew that these settings can be the reason for rancid network performance. *gnampf* It seems I am getting old.

The lecture period started this week.

This semester, together with a colleague, I am giving the lecture on “Scientific Visualization”. I am looking forward to it. However, this, of course, also means a lot of work, mainly revising and updating the lecture slides and exercises. Therefore, I hardly had time for anything else. Well, this is also part of my job.

The image of the article, by the way, shows a part of the “visible human” data set using direct volume rendering.

I see myself as a Windows fanboy and an early adopter. I really do. (Don’t mistake me for a Microsoft fanboy. That is something completely different.)

This week, by chance, I found Windows 8.1 on my DreamSpark account. I was smart enough to ignore the preview so far, thus I was able to continue working. But since neither the title nor the description of the DreamSpark download mentioned anything about being a preview, I thought this might be the real thing, released a bit early to DreamSpark users, like Microsoft sometimes does. That said, I downloaded and installed Windows 8.1 on my laptop.

Spoiler: right now, while I am writing this article, I am performing a clean installation of Windows 8(.0) on my laptop.

When I switched from Windows 7 to Windows 8, I quickly got used to the new interface, and I liked many of the new features. Of course, a start button would be nice and the start screen (aka full screen start menu) has a strange way of (un-)organization, but I was able to work rather efficiently with all of it.

Then I tried Windows 8.1 … I don’t want to write “yet another flame-post”, therefore I will limit myself to just one single feature: local accounts.

I have a local PC (stands for “personal” computer). I take on several roles when using this PC, e.g. being myself when I do online shopping, being my working self when answering e-mails from one of my mail accounts (I currently use 5 different accounts), being my gaming self when I do online gaming, etc. I could continue this list. What is important here is that I fulfill different roles when using my computer, but I am always myself. Thus I use one account to do all this. And my account is my local account and not a Microsoft account forcibly tied to a Microsoft e-mail address.

I don’t want to use a Microsoft account on my local personal computer.

I don’t want to use a Microsoft account on my local personal computer.

I don’t want to use a Microsoft account on my local personal computer.

So, after some hardship I finally managed to use a local user account in Windows 8.1 as well, but then suddenly I was no longer able to use several features of Windows. Several Apps stopped working. Yes, they state that I can log into my Microsoft account independently, but with some of the Apps, this really does not work.

More importantly: SkyDrive cannot be used with a local user account.

At least, I was not able to find any work-around to get any SkyDrive integration installed or working. This is a true show-stopper. The improved SkyDrive integration ensures that I will always have enough storage space in my cloud, since I cannot use it any more. Sad. Truly sad.

Well, as stated above, I am switching back to Windows 8(.0) and I will suppress my early-adopter urges until I get proof that local accounts are usable again with the next Windows.

If you want to read more about this misery, I liked reading this article: http://www.tweakguides.com/Windows81_1.html (especially page 3).

Maybe the most important coding project of mine, with impact on my private programming as well as on my work, is TheLib. The basic idea is to collect all classes which we (two friends and myself) wrote and used over and over again in several different projects. These classes usually are wrappers for compatibility or convenience around API calls or library calls (e.g. STL, Boost, whatever). That’s where the name of our lib comes from: Totally Helpful Extensions. And, it is just cool to write: #include "the/exception.h".

However, I hear very often: “Why do you write a lib? There are plenty already for all tasks.”

If that were true, none of us would write programs anymore and we would only “compose” programs from libs. Well, we don’t. Or rather, I don’t. Meaning: TheLib really is helpful. It is not a replacement for the other libs. It’s a complement, an extension.

One example: strings!

The string functionality in TheLib is not nearly as powerful as one would require to write a fully fledged text processor. That is not the goal of TheLib. We wrote these functions to provide something beyond basic functionality. The idea is to enable simple or prototypical applications to easily implement nice-to-use interfaces for the user.

Especially under Linux (but also under Windows) there are usually a total of three different types of strings:

  1. char * or std::string which store ASCII or ANSI strings with locale dependent character sets
  2. char * or std::string which store multi-byte strings, e.g. using UTF-8 encoding
  3. wchar_t * or std::wstring which store Unicode strings.

Depending on these types different API functions need to be called, e.g. for determining the length of the string:

  1. strlen
  2. multiple calls of mbrlen
  3. wcslen

One issue that arises between cases 1 and 2 is that modern Linux often uses a locale which stores UTF-8 strings within the standard strings. As long as strings are only written, stored, and displayed, this is a great way to maintain compatibility and gain the modern benefit of special character availability. However, as soon as you perform a more complex operation (like creating a substring), this approach results in unexpected behaviour, as the bytes of a single multi-byte character are treated like independent characters.

Example:

  • Your user is a geek and enters “あlptraum” as input string.
  • This string is stored in a std::string using a UTF-8 locale.
  • Your application now wants to extract the first character for some reason (e.g. to produce typographic capitalization using a specialized font).
  • The normal way of doing this is accessing char first_char = s[0]; and std::string remaining = s.substr(1);
  • Because the Japanese “あ” occupies three bytes in UTF-8, this tears the character apart: first_char receives only the first byte of the sequence, and remaining still starts in the middle of the multi-byte character.

This not only applies to Japanese characters, but obviously to almost all characters with diacritics. What is even more important: this issue also results in unexpected behaviour when using (or implementing) string operations which ignore case, e.g. comparisons.

Example of changing a string to lower case:

// we will do this the STL-way:
// http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/

std::string data;
// 'data' is set to "あlptraum", stored as UTF-8 bytes

std::transform(data.begin(), data.end(), data.begin(), ::tolower);
// 'data' is now mangled: ::tolower was applied to each byte of the
// multi-byte "あ" individually, not to the character as a whole
// ...

To avoid this problem, TheLib internally initializes the system locale for the application and detects whether the locale uses UTF-8 encoding. If it does, all TheLib string functions call the multi-byte API functions and work as expected. In addition, TheLib provides some functions to explicitly convert from or to UTF-8 strings (e.g. for file I/O).

Of course, you don’t need TheLib to do this. You can use another lib (probably. I only know the IBM Unicode lib, which seems like a huge hulk), or you can use your own workarounds, or you can ignore such problems because “they will not occur in your application scenarios”. However, having TheLib do the job is just handy. Nothing more.

Software should solve problems. Sometimes this is the case.

I had a problem:

I have a somewhat older convertible laptop, an Acer Aspire 1820PT. A nice and cheap convertible for its time. With touch screen support for up to two fingers and acceptable computational power. I have since upgraded it with an SSD and I am now running Windows 8. So far so good. The problem, however, is that the tilt sensor is no longer supported under Windows 8. :-(

So I needed a solution. Hacking drivers or even writing drivers myself is not up my alley. I am an application developer. But, if something does not work automatically (anymore), we just need to make the manual use as comfortable as possible. That’s why I wrote a tiny tool: the DisplayRotator.

The idea is simple: the tool is attached to the taskbar. As soon as it is started, it shows a simple window with four buttons for the four possible display rotation settings. Press one of these buttons and the display settings are changed accordingly. With this, I can set up the desktop orientation of my convertible with two clicks, or even two taps with my finger, and rotate the desktop any way I like.

DisplayRotator.zip – Display Rotation Tool
[152 KB; MD5: 07c3efddd05a98bf4d02db595b87f2fe; More Info]

And, because I can, the zip also contains the source code of the tool. It is written in C# and naturally uses the Windows API to change the display settings. Nice and easy. With the same code base all display settings can be changed, like screen resolution and refresh rate. Even detaching or attaching monitors to the desktop is possible. Ok, the code for these functions is not in the tool, but the API calls are the same.

Maybe the tool can be of use to someone else too.

I like working at a university. I like solving problems without known solutions. I like improving existing solutions. I like working in my own directions. I like working with students. I like advising students with their work. I even like giving lectures.

However, what I don’t like, at least currently, is writing publications about my scientific results, which, of course, is a primary part of building a scientific career. This process is currently awful, tiring and frustrating.

This year I wrote six articles. Each time I invested much work into preparation and presentation. However, *none* of them will be published this year. Not a single one. I have never had such a bad year before. And from a realistic point of view, this is like a death blow for an academic career.

Of course, this is not nice in itself, but what is even more frustrating are the reviews with which my articles got rejected. One of my papers had numeric scores (1 = worst, 5 = best) of 4 + 4 + 3 + 3. Also, the reviewers’ comments are sadly often unprofessional, useless, and even polemic. Another paper was loved by two of the four reviewers. The third one found it borderline, but thought it could be improved. The fourth and primary one, however, wrote something which I can only understand as “I don’t like it”. His review had more text, but not much more content than that. Result: rejected.

I had some discussions about this issue of overcritical reviewing with several professors. All were aware of the problem and all agreed that this happens to a community once in a while. Sadly, in my opinion it has gotten continuously worse over the last seven years I have been working and publishing in the visualization community (except for this year). Honestly, I don’t know what will come of it …

Not all points are equal. There are always fundamental misconceptions about the type of data I am working with in my visualization research.

I work with particle data. This data is usually the result of simulations, e.g. generated through the methods of molecular dynamics or a discrete element method. The individual particles represent independent elements, e.g. atoms or mass centers, which are neither connected nor correlated. Of course, within the simulation these particles interact and influence each other, but in the pure data which is available to me for visualization, there is no topological structure between the particles at all. Literature has several further names for this kind of data: point-based data, mesh-less data, and, maybe the best fitting one, scattered data. Technically speaking, these data sets are arbitrarily sorted lists of elements, each storing a position and optional additional attributes, like a sphere radius or a color. But that is it. There is no more information and in general you cannot make any assumptions about structures within the data.
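In code, such a data set really is little more than this (a hypothetical minimal layout; the attribute names are purely illustrative):

```cpp
#include <cstdint>
#include <vector>

// one particle: a position plus optional per-particle attributes.
// note what is NOT here: no neighbour lists, no connectivity,
// no ordering guarantees -- just an element in a flat list.
struct Particle {
    float x, y, z;        // position in 3D space
    float radius;         // optional attribute, e.g. sphere glyph radius
    std::uint32_t color;  // optional attribute, e.g. packed RGBA
};

// a whole "data set" is simply an arbitrarily ordered list of elements
using ParticleList = std::vector<Particle>;
```

Any structure a visualization needs (spatial grids, neighbourhoods, clusters) has to be reconstructed from the positions; it is not part of the data.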

I now want to write about one very common misconception: there is the further data type of point clouds, also called point-set surfaces. These data also consist of a list of points, which are at first not correlated. Typical sources of such data are point-based modeling in 3D computer graphics and, more commonly, scanning of real-world objects. Such scans are created by laser scanners or structured-light scanners like the Kinect. The fact that these data sets store a simple list of points, i.e. that they are technically identical to particle data, leads many people to the misconception that the two types of data sets are the same. They are not.

Point clouds are discrete samplings of a continuous function, i.e. the surface of the scanned or modeled object. Therefore, all points reside, within some error margin, on this 2D surface embedded within the 3D space. This aspect is fundamentally different from particle data, in which the particles are freely placed throughout the 3D space. Almost every algorithm working with point-cloud data works with the assumption of a continuous surface represented by the points. As this assumption is not valid for particle data, these algorithms cannot easily be applied to this kind of data.

Well, obviously I have not published enough to make my colleagues in my field recognize this difference. I am off then …

For several years now, I have been writing my smaller tools in C#. I like the language and the runtime framework is powerful. For these tools I do not care about platform independence. Or about Mono. I am a fan of Windows Forms. It is a nice and capable GUI toolkit, close to the traditional C++ world, but with good abstractions in the right places. And then, there is WPF.

I do not know what to think about WPF.

On the one hand, Microsoft claims WPF to be the future. Many new functions are introduced (first or only) in WPF. Windows 8 Apps and Windows Phone Apps require the programmer to use WPF for the GUI. And compared to Windows Forms, the data binding is nicely done and cleanly integrated into the language.

On the other hand, WPF is not a perfect solution. The performance is not great if the GUI gets complex or is heavily modified. The editor integrated in Visual Studio is not very good. (Okay, there is Expression Blend, but I do not like using two editors for one project at the same time.)

What really bothers me, however, is that even Microsoft has no clear policy concerning this issue. Visual Studio 2012 is written using WPF, and WPF is the required GUI toolkit for Apps. However, the new Office, for example, although sharing the new Metro design, does not use WPF, but uses classical GUI toolkits to recreate the same look-and-feel. All in all, I just do not know which one to bet on, Windows Forms or WPF. Probably it really does not matter at all. But, somehow, I do not like the current situation.

This was the last week of the lecture period of this summer semester. As research staff of the university, I, of course, do not attend lectures, nor do I have to study for exams. Been there, done that. But the lecture period is important for my daily work. I supervise students: in lectures, in exercise courses, and with their bachelor, master, or diploma theses. Now, in the lecture-free time, there is less of all of that. Therefore, traditionally, the vacation season for the university staff starts now as well. I will continue working hard for two more weeks, to push some of my projects forward, mainly VICCI and MegaMol. But after that, it is time for a leave. I am looking forward to it.