dmitryK's blog

Tuesday, August 13, 2019

Krita 2019 Sprint: Animation and Workflow BoF

Last week we had a huge Krita Sprint in Deventer. Boudewijn has written a detailed report here, and I will concentrate on the Animation and Workflow discussion we had on Tuesday, while Boudewijn was away meeting and managing arriving people. The discussion was centered around Steven and his workflow, but other people joined in as it went: Noemie, Scott, Raghavendra and Jouni.

(Eternal) Eraser problem

Steven brought up the point that the current brush options "Eraser Switch Size" and "Eraser Switch Opacity" are buggy, which wound up an old topic once again. These options were always considered a workaround for people who need a distinct eraser tool/brush tip, and they have always been difficult to maintain.

After a long discussion with a broader circle of people we concluded that the "Ten Brushes Plugin" can be used as an alternative to a separate eraser tool: one just assigns some eraser-behaving preset to the 'E' key using this plugin. So we decided we need the following steps:

Proposed solution:

  1. The Ten Brushes Plugin should have some eraser preset configured by default
  2. This eraser preset should be assigned to "Shift+E" by default, so when people ask about an "Eraser Tool" we can just tell them: "please use Shift+E".
  3. [BUG] The Ten Brushes Plugin doesn't reset back to a normal brush when the user picks/changes the painting color, like the normal eraser mode does.
  4. [BUG] Brush slot numbering is done in 1,2,3,...,0 order, which is not obvious. It should be 0,1,2,...,9 instead.
  5. [BUG] It is not possible to set up a shortcut for a brush preset right in the Ten Brushes Plugin itself; the user has to go to the settings dialog instead.

Stabilizer workflow issues

In Krita, stabilizer settings are global. That is, they are applied to whatever brush preset you are using at the moment. That is very inconvenient, e.g. when you do precise line art: if you switch to a big eraser to fix up the line, you don't need the same stabilization as with the liner.

Proposed solution:

  1. The stabilizer settings stay in the Tool Options docker; we don't move them into the Brush Settings (because sometimes you need to make them global?)
  2. A brush preset should have a "Save Stabilizer Settings" checkbox that will load/save the stabilizer settings when the preset is selected/deselected.
  3. The editing of these (basically) brush-based settings will happen in the Tool Options.

Questions:

  • I'm not sure if the last point is sane. Technically, we can move the stabilizer settings into the brush preset. And if the user wants to use the same stabilizer settings in different presets, they can just lock the corresponding brush settings (we have a special lock icon for that). So should we move the stabilizer settings into the brush preset editor or keep them in the Tool Options?
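
The checkbox-based design above can be sketched as follows. This is a hypothetical model, not Krita's actual classes; all the names and fields are illustrative only:

```cpp
#include <cassert>
#include <optional>

// Hypothetical stabilizer parameters (the field names are illustrative).
struct StabilizerSettings {
    int sampleCount = 20;
    double delayDistance = 0.0;
};

// A preset may optionally carry its own stabilizer settings: the proposed
// "Save Stabilizer Settings" checkbox decides whether the optional is filled.
struct BrushPreset {
    std::optional<StabilizerSettings> stabilizer;
};

// When a preset is selected, the tool uses the preset's settings if present,
// otherwise it falls back to the global (Tool Options) settings.
StabilizerSettings effectiveSettings(const BrushPreset &preset,
                                     const StabilizerSettings &globals)
{
    return preset.stabilizer ? *preset.stabilizer : globals;
}
```

The nice property of this fallback scheme is that a big eraser preset with the checkbox unchecked keeps following the global settings, while a liner preset with the checkbox checked always brings its own stabilization along.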

Cut Brush feature

Sometimes painters need a lot of stamps for often-used objects, e.g. a head or a leg of an animation character. A lot of painters use the brush preset selector as storage for that: if you need a copy of a head on another frame, just select the preset and click at the proper position. We already have stamp brushes and they work quite well; we just need to streamline the workflow a bit.

Proposed solution:

  1. Add a shortcut for converting the current selection into a brush. In particular, it should:
    • create a brush from the current selection, assign a default name to it and create an icon from the selection itself
    • deselect the current selection, to ensure that the user can paint right after pressing this shortcut
  2. There should be shortcuts to rotate and scale the current brush
  3. There should be a shortcut for switching to the previous/next dab of an animated brush
  4. The brush needs a special outline mode that paints not an outline, but a full-color preview. It should be activated by some modifier (press and hold).
  5. Ideally, if multiple frames are selected, the created brush should become animated. That would allow people to create a "walking brush" or a "raining brush".

Multiframe editing mode

One of the major features Krita's animation support still lacks is a multiframe editing mode, that is, the ability to transform/edit multiple frames at once. We discussed it and ended up with a list of requirements.

Proposed solution:

  1. By default, all the editing tools transform the current frame only
  2. The only exception is "Image" operations, which operate on the entire image, e.g. scale, rotate, change color space. These operations work on all existing frames.
  3. If more than one frame is selected in the timeline, then the operation/tool should be applied to these frames only.
  4. We need a shortcut/action in the frame's (or timeline layer's) context menu: "Select All Frames"
  5. Tools/Actions that should support multiframe operations:
    • Brush Tool (low-priority)
    • Move Tool
    • Transform Tool
    • Fill Tool (may be efficiently used on multiple frames with erase-mode-trick)
    • Filters
    • Copy-Paste selection (now we can only copy-paste frames, not selections)
    • Fill with Color/Pattern/Clear

BUGS

There is also a set of unsorted bugs that we found during the discussion:
  1. On Windows, multiple main windows don't have unique identifiers, so they are not distinguishable in OBS.
  2. The animated brush spits out a lot of dabs at the beginning of the stroke
  3. "Show in Timeline" should be the default for all new layers
  4. Fill Tool is broken with Onion Skins (BUG:405753)
  5. Transform Tool is broken with Onion Skins (BUG:408152)
  6. Move Tool is broken with Onion Skins (BUG:392557)
  7. When copy-pasting frames on the timeline, in-betweens should override the destination (and technically remove everything that was at the destination position). Right now the source and destination keyframes are merged, which is not what animators expect.
  8. Changing the "End" of the animation in the "Animation" docker doesn't update the timeline's scroll area. You need to create a new layer to update it.
  9. The Delayed Save dialog doesn't show the name of the stroke that delays it (and sometimes the progress bar as well). It used to work, but is now broken.
  10. [WISH] We need an "Insert pasted frames" action that will not override the destination, but just offset it to the right.
  11. [WISH] Filters need better progress reporting
  12. [WISH] Auto-change the background of the Text Edit dialog when the text color is too similar to it.

As a conclusion, it was very nice to be at the sprint and to be able to talk to real painters! Face-to-face meetings are really important for getting such detailed lists of the new features we need to implement. If we had held this discussion on Phabricator, we would have spent weeks on it :)



Tuesday, November 20, 2018

Krita Fall 2018 Sprint Results: HDR support for Krita and Qt!

In October we held a Krita developers' sprint in Deventer. One of my goals for the sprint was to start implementing High Dynamic Range (HDR) display support for Krita. Now almost a month has passed and I am finally ready to publish some preliminary results of what I started during the sprint.

The funny thing is, before the sprint I had never seen what an HDR picture even looks like! People around talked about it, shops listed displays with HDR support, documentation mentioned it, but what was all this buzz about? My original understanding was "Krita passes 16-bit color to OpenGL, so we should already be ready for that". In Deventer I managed to play with Boud's display, which is one of the few certified HDR displays supporting 1000 nits of brightness, and found out that my original understanding was entirely wrong :)

Over the last couple of years the computer display industry has changed significantly. For almost 30 years people expected their monitors to look like normal paper: displays were calibrated to look like a sheet of white paper illuminated by a strictly defined source of light (D65).

Now, with the appearance of HDR technology, things have changed. Displays don't try to emulate paper anymore, they try to resemble real sources of light! For example, modern displays can show a sun beam piercing through a window not just like "a paper photo of a beam", but as a real source of light shining directly into your eye! Basically, the displays now have LEDs that can shine as bright as the real sun, so why not use them? :) (Well, the display is only 1000 nits and the sun is about 1.6 billion nits, but it's still very bright.)

If you look at the original EXR file you will see how the window "shines" from your screen, as if it were real.
By itself, the idea of having a display that can send a sun-strength beam into your eye might not be a lot of fun, but the side effects of the technology are quite neat.

In the first place, displays supporting HDR do not work in the standard sRGB color space! Instead they use Rec. 2020, a color space widely used in cinematography. It has different primary colors for the "green" and "red" channels, which means it can encode many more variations of greenish and reddish colors.

In the second place, instead of using the traditional exponential gamma correction, they use the Perceptual Quantizer (PQ), which not only extends the dynamic range to sun-bright values, but also allows encoding very dark areas that are not available in usual sRGB.
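
To make the PQ curve concrete, here is a small stand-alone implementation of its inverse EOTF (the encoding direction), with the constants from SMPTE ST 2084. This is just an illustration of the curve, not Krita's code:

```cpp
#include <cassert>
#include <cmath>

// SMPTE ST 2084 (PQ) inverse EOTF: maps normalized absolute luminance
// (1.0 == 10000 nits) to the encoded signal value in [0, 1].
double pqEncode(double Y)
{
    const double m1 = 2610.0 / 16384.0;         // ~0.1593
    const double m2 = 2523.0 / 4096.0 * 128.0;  // ~78.84
    const double c1 = 3424.0 / 4096.0;          // ~0.8359
    const double c2 = 2413.0 / 4096.0 * 32.0;   // ~18.85
    const double c3 = 2392.0 / 4096.0 * 32.0;   // ~18.69

    const double p = std::pow(Y, m1);
    return std::pow((c1 + c2 * p) / (1.0 + c3 * p), m2);
}
```

Note how a classic "paper white" of 100 nits (Y = 0.01) already lands at a signal value of about 0.51: the entire upper half of the PQ range is reserved for highlights brighter than paper, while the steep slope near zero gives very fine steps in the darks.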

Finally, all HDR displays transfer data in 10-bit mode! Even if one doesn't need real HDR features, having a 10-bit pipeline can improve both painters' and designers' workflow a lot!

Technical details

From the developer's point of view, the current state of HDR technology is a bit of a mess. It's really early days. The only platform where HDR is supported at the moment is Windows 10 (via DirectX).

Neither Linux nor OSX can currently drive the hardware in HDR mode. Therefore all the text below relates to the Windows-only case.

When the user switches the display into HDR mode, the OS automatically starts to communicate with it in p2020-pq mode. All the colors that normal applications render in sRGB are converted automatically. So if an application wants to render an image directly in p2020-pq, it should create a special "framebuffer" object (a swap chain), set its color space to p2020-pq, and ensure that all the intermediate textures have the correct color space and bit depth.

In general, to ensure that the window is rendered in HDR mode, one should do the following:

  1. Create a DXGI swap chain with 10-bit or 16-bit pixel format
  2. Set the color space of that swap chain to either p2020-pq (for 10-bit mode) or scRGB (for 16-bit mode).
  3. Make sure all the intermediate textures/surfaces are rendered in 10/16-bit mode (to avoid loss of precision)
  4. Since the GUI is usually rendered on the same swap chain, one should also ensure that the GUI is converted from sRGB into the destination color space (either p2020-pq or scRGB)
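
For reference, the format/color-space pairs from steps 1-2 correspond to concrete DXGI identifiers. The sketch below records them as plain strings so it stays compilable outside Windows; the struct and helper names are mine, not part of any API:

```cpp
#include <cassert>
#include <string>

// One HDR presentation mode, reduced to the pair of DXGI constants it
// corresponds to (the constant names match the DirectX headers; the
// struct itself exists only for this illustration).
struct HdrMode {
    std::string pixelFormat;  // DXGI_FORMAT_* for the swap chain
    std::string colorSpace;   // DXGI_COLOR_SPACE_* set via SetColorSpace1()
    int bitsPerChannel;
};

// 10-bit unorm buffer + BT.2020 primaries + PQ transfer ("p2020-pq" / HDR10)
HdrMode hdr10Mode() {
    return { "DXGI_FORMAT_R10G10B10A2_UNORM",
             "DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020", 10 };
}

// 16-bit float buffer + BT.709 primaries + linear transfer (scRGB)
HdrMode scRgbMode() {
    return { "DXGI_FORMAT_R16G16B16A16_FLOAT",
             "DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709", 16 };
}
```

The two modes are interchangeable from the display's point of view; scRGB trades memory for a linear, easy-to-blend working space, while the 10-bit PQ mode matches what the display receives on the wire.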
In Krita we use Qt to render everything, including our OpenGL canvas widget. I had to dig deep into Qt's code to find out that Qt unconditionally uses an 8-bit color format for rendering windows. Therefore, even though Krita passes 16-bit textures to the system, the data is still converted into 8 bits somewhere in Qt/OpenGL. So I had to hack Qt significantly...

Making Qt and Angle support HDR

Here comes the most interesting part. We use Qt to access the system's OpenGL implementation. Traditionally, Qt would forward all our requests to the OpenGL implementation of the GPU driver, but... the problem is that quite a lot of OpenGL drivers on Windows are of "suboptimal quality". The quality of the drivers is so "questionable" that people from the Google Chromium project even wrote a special library that converts OpenGL calls into DirectX API calls and used it instead of calling OpenGL directly. The library is called Angle. And, yes, Qt also uses Angle.

Below is a sketch showing the relation between all the libraries and APIs. As you can see, Angle provides two interfaces: EGL for creating windows, initializing surfaces and configuring displays, and traditional OpenGL for the rendering itself.

To allow switching of the surface's colorspace I had to hack Angle's EGL interface and, basically, implement three extensions for it:

After that I had to patch QSurfaceFormat to support all these color spaces (previously, it supported sRGB only). So now, if you configure the default format before creating a QGuiApplication object, the framebuffer object (swap chain) will use it! :)

// set main frame buffer color space to scRGB
QSurfaceFormat fmt;
// ... skipped ...
fmt.setColorSpace(QSurfaceFormat::scRGBColorSpace);
QSurfaceFormat::setDefaultFormat(fmt);

// create the app (also initializes OpenGL implementation
// if compiled dynamically)
QApplication app(argc, argv);
return app.exec();

I have implemented a preliminary demo app that uses a patched Qt and shows an EXR image in HDR mode. Please check out the source code here:

Demo application itself:
https://github.com/dimula73/hdrtest/tree/test-hacked-hdr

Patched version of QtBase (based on Qt 5.11.2):
https://github.com/dimula73/qtbase/tree/krita-hdr

The application and the Qt patch are still work-in-progress, but I would really love to hear feedback and ideas from the KDE and Qt communities about it, and to push this code upstream to Qt/Angle when it is finished! :)

List of things to be done

  1. Color-convert Qt's internal GUI textures. Qt renders the GUI (buttons, windows, text boxes) in CPU memory (in the sRGB color space), then uploads the result into an OpenGL texture and renders that into the framebuffer. Obviously, when the framebuffer is tagged with a non-sRGB color space, the result is incorrect --- the GUI becomes much brighter or darker than expected. I need to somehow mark all Qt's internal textures (surfaces?) with a color space tag, but I don't yet know how... Does anybody know?
  2. Qt should also check whether the extensions are actually supported by the implementation, e.g. when Qt uses a driver-provided implementation of OpenGL. Right now it tries to use the extensions without asking :)
  3. The most difficult problem: OpenGL does not provide any means of finding out whether a combination of framebuffer format and color space is actually supported by the hardware! The conventional way to check is to create an OpenGL framebuffer and set the desired color space: if the calls do not fail with an error, the mode is supported. But that doesn't fit the way Qt selects its format! Qt expects one to call the singleton QSurfaceFormat::setDefaultFormat() once and then proceed to creating the application. If the creation fails, there is no way to recover! Does anybody have an idea how that could be done in the least hackish way?

I would really love to hear your questions and feedback on this topic! :)

Saturday, August 2, 2014

Krita: illustrated beginners guide in Russian

Some time ago our user Tyson Tan (creator of Krita's mascot Kiki) published his beginner's guide to Krita. Now this tutorial is also available in Russian!

If you happen to know Russian, please follow the link :)



Krita: an illustrated guide for the beginning artist, in Russian



Just recently our user Tyson Tan created a wonderful guide for beginning Krita users. And thanks to a translation by Georgiy Sypchenko, this guide is now available in Russian as well. Click the picture and dive in!



Monday, July 14, 2014

Notes from Calligra Sprint. Part 2: Memory fragmentation in Krita fixed

During the second day of the Calligra sprint in Deventer we split into two small groups. Friedrich, Thorsten, Jigar and Jaroslaw discussed global Calligra issues, while Boud and I concentrated on the performance of Krita and its memory consumption.

We tried to find out why Krita is not fast enough for painting with big brushes on huge images. For our tests we created a two-layer image of 8k by 8k pixels (which is 3x256 MiB: 2 layers + projection) and started to paint with a 1k by 1k pixel brush. Just to compare, SAI Painting Tool simply forbids creating images of more than 5k by 5k pixels and brushes more than 500 pixels wide. And during these tests we found out a really interesting thing...

I guess everyone has at least once read about custom memory management in C++. All these custom new/delete operators and pool allocators usually seem so "geekish", for "really special purposes only". To tell you the truth, I thought I would never need to use them in my life, because the standard library allocators "should be enough for everyone". Well, until curious things started to happen...

Well, the first sign of the problems appeared quite long ago. People started to complain that, according to system monitoring tools (like 'top'), Krita ate quite a lot of memory. We could never reproduce it. What's more, 'massif' and our internal tile counters always showed we had no memory leaks: we used exactly the number of tiles we needed to store the image of a particular size.

But while making these 8k-image tests, we started to notice that although the number of tiles doesn't grow, the memory reported by 'top' grows quite significantly. Instead of occupying the usual 1.3 GiB such an image would need (layer data + about 400 MiB for brushes and textures), the reported memory grew up to 3 GiB and higher, until the OOM Killer woke up and killed Krita. This gave us clear evidence that we had a fragmentation problem.

Indeed, during every stroke we have to create about 15000(!) 16 KiB objects (tiles). It is quite probable that after a couple of strokes the memory becomes rather fragmented. So we decided to try boost::pool for the allocation of these chunks... and it worked! Instead of growing, the memory footprint stabilized at 1.3 GiB. And that is despite the fact that boost::pool doesn't free the free'd memory until destruction or explicit purging [0].

Now this new memory management code is already in master! According to some synthetic tests, painting should become a bit faster, not to mention the much smaller memory usage.

Conclusion:

If you see unusually high memory consumption in your application, and the results measured by massif differ significantly from what you see in 'top', you probably have a fragmentation problem. To prove it, try not to return the memory back to the system, but to reuse it. The consumption might fall significantly, especially if you allocate memory in different threads.



[0] - You can release unused memory by explicitly calling release_memory(), but 1) the pool must then be ordered, which worsens performance; 2) the release_memory() operation takes about 20-30 seconds(!), so it is of no use for us.
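
The idea behind the fix can be sketched as a minimal fixed-size pool: equal-sized chunks are carved out of large slabs and recycled through a free list, so freed tiles are reused instead of being handed back to a fragmenting heap. This is a simplified, single-threaded illustration of what boost::pool does, not Krita's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-size pool allocator: carves equal-sized chunks out of
// large slabs and recycles freed chunks through an intrusive free list.
// Freed memory is reused, never returned to the OS -- which is exactly
// what keeps the heap from fragmenting.
class TilePool {
public:
    explicit TilePool(std::size_t chunkSize, std::size_t chunksPerSlab = 1024)
        : m_chunkSize(chunkSize < sizeof(void*) ? sizeof(void*) : chunkSize),
          m_chunksPerSlab(chunksPerSlab) {}

    ~TilePool() {
        for (char *slab : m_slabs) delete[] slab;
    }

    void *allocate() {
        if (!m_freeList) addSlab();
        void *chunk = m_freeList;
        m_freeList = *static_cast<void**>(chunk); // pop the free list
        return chunk;
    }

    void deallocate(void *chunk) {
        *static_cast<void**>(chunk) = m_freeList; // push onto the free list
        m_freeList = chunk;
    }

    std::size_t slabCount() const { return m_slabs.size(); }

private:
    void addSlab() {
        char *slab = new char[m_chunkSize * m_chunksPerSlab];
        m_slabs.push_back(slab);
        // thread every chunk of the new slab onto the free list
        for (std::size_t i = 0; i < m_chunksPerSlab; ++i)
            deallocate(slab + i * m_chunkSize);
    }

    std::size_t m_chunkSize;
    std::size_t m_chunksPerSlab;
    void *m_freeList = nullptr;
    std::vector<char*> m_slabs;
};
```

Repeatedly allocating and freeing tiles keeps touching the same slabs, so after the first stroke the footprint stays flat instead of creeping upwards with every new stroke.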



Sunday, July 13, 2014

Notes from Calligra Sprint in Deventer. Part 1: Translation-friendly code

Last weekend we had a really nice sprint in Deventer, which was hosted by Irina and Boudewijn (thank you very much!). We spent two days on discussions, planning, coding and profiling our software, which produced many fruitful results.

On Saturday we mostly talked about and discussed our current problems, like porting Calligra to Qt5 and splitting the libraries more sanely (e.g. we shouldn't require mobile applications to compile and link against QWidget-based libraries). Although these problems are quite important, I will not describe them now (other people will blog about them very soon). Instead I'm going to tell you about a different problem we also discussed — translations.

The point is, when using the i18n() macro it is quite easy to make mistakes that make a translator's life a disaster, so we decided to put together a set of rules of thumb which developers should follow to avoid creating such issues. Here are these five short rules:

  1. Avoid passing a localized string into a i18n macro
  2. Add context to your strings
  3. Undo commands must have (qtundo-format) context
  4. Use capitalization properly
  5. Beware of sticky strings
Next we will talk about each of the rules in detail:

1. Avoid passing a localized string into a i18n macro

They might not be compatible in case, gender or anything else you have no idea about

// Such code is incorrect in 99% of the cases
QString str = i18n("foo bar");
i18n("Some nice string %1", str);


Example 1

// WRONG:
wrongString = i18n("Delete %1", XXX ? i18n("Layer") : i18n("Mask"))

// CORRECT:
correctString = XXX ? i18n("Delete Layer") : i18n("Delete Mask")

Such string concatenation is correct in English, but it is completely inappropriate in many languages, in which a noun can change its form depending on the case. The problem is that in the macro i18n("Mask") the word "Mask" is used in the nominative case (it is a subject), but in the expression "Delete Mask" it is in the accusative case (it is an object). In Russian, for example, the two strings will be different, and the translator will not be able to solve the issue easily.

Example 2

// WRONG:
wrongString = i18n("Last %1", XXX ? i18n("Monday") : i18n("Friday"))

// CORRECT:
correctString = XXX ? i18n("Last Monday") : i18n("Last Friday")

This case is more complicated. Both words "Monday" and "Friday" are used in the nominative case, so they will not change their form. But "Monday" and "Friday" have different genders in Russian, so the adjective "Last" must change its form depending on the word it is paired with. Therefore we need separate strings for the two terms.

The tricky thing here is that we have 7 days in a week, so ideally we should have 7 separate strings for "Last ...", 7 more strings for "Next ..." and so on.

Example 3 — Using registry values

// WRONG:
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply %1", filter->name())

// CORRECT: is there a correct way at all?
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply: \"%1\"", filter->name())

Just imagine how many objects can be stored inside the registry. It can be a dozen, a hundred or a thousand objects. We cannot control the case, gender and form of each object name in the list (can we?). The easiest approach here is to put the object name in quotes and "cite" it literally. This hides the problem in most languages.

2. Add context to your strings

Prefer adding context to your strings rather than expecting translators to read your thoughts.

Here is an example of three strings for the blur filter. They illustrate the three most important translation contexts:

i18nc("@title:window", "Blur Filter")

Window titles are usually nouns (and translated as nouns). There is no limit on the size of the string.

i18nc("@action:button", "Apply Blur Filter")

Button actions are usually verbs. The length of the string is also not very important.

i18nc("@action:inmenu", "Blur")

Menu actions are also verbs, but the length of the string should be as short as possible.

3. Undo commands must have (qtundo-format) context

Adding this context tells the translators to use the "Magic String" functionality. Such strings are special and cannot be reused anywhere else.

In Krita and Calligra this context is now added automatically, because we use C++ type-checking mechanism to limit the strings passed to an undo command:

KUndo2Command(const KUndo2MagicString &text, KUndo2Command *parent);
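
The type-checking trick can be illustrated with a stripped-down sketch (names simplified; the real classes in Calligra are KUndo2MagicString and kundo2_i18n()). The string type has a private constructor, and only the undo-aware i18n wrapper is allowed to create it, so passing a plain i18n() string to an undo command fails at compile time:

```cpp
#include <cassert>
#include <string>

// MagicString can only be created by kundo2_i18n(), so an undo command
// declared as UndoCommand(const MagicString &) will not compile when
// given a plain string from an ordinary i18n() call.
class MagicString {
public:
    std::string toString() const { return m_text; }
private:
    explicit MagicString(std::string text) : m_text(std::move(text)) {}
    friend MagicString kundo2_i18n(const char *text);
    std::string m_text;
};

// The only factory: in the real code this is also where the
// (qtundo-format) context gets attached for translators.
MagicString kundo2_i18n(const char *text)
{
    return MagicString(std::string(text));
}

struct UndoCommand {
    explicit UndoCommand(const MagicString &text) : title(text.toString()) {}
    std::string title;
};
```

With this in place, `UndoCommand(kundo2_i18n("Move Layer"))` compiles, while `UndoCommand("Move Layer")` or `UndoCommand(i18n("Move Layer"))` is rejected by the compiler, which is exactly how the context is enforced automatically.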

4. Use capitalization properly

See KDE policy for details.

5. Beware of sticky strings

When the same string without a context is reused in different places (and especially in different files), double-check whether that is appropriate.

E.g. i18n("Duplicate") can be either a brush engine name (a noun) or a menu action for cloning a layer (a verb). Obviously, not all languages use the same form of a word for both the verb and noun meanings. Such strings must be split by assigning them different contexts.

Alexander Potashev has created a special Python script that iterates through all the strings in a .po file and reports all the sticky strings in a convenient format.

Conclusion

Of course, all these rules are only recommendations. They all have exceptions and limitations, but following them in the most trivial cases will make the life of translators much easier.

In the next part of my notes from the sprint I will write about how Boud and I hunted down memory fragmentation problems in Krita on Sunday... :)

Saturday, July 5, 2014

Calligra Sprint 2014 has started!

Calligra Sprint 2014 has started! Today you have a unique chance to ask the developers any questions about Calligra, Krita and Kexi on Twitter using the hashtag #AskCalligraDevs
