
KDAB on Qt: Heaptrack v1.1.0 release


After more than a year of work, I’m pleased to release another version of heaptrack, the Linux memory profiler! The new version 1.1.0 comes with some new features, significant performance improvements and – most importantly – much improved stability and correctness. If you have tried version v1.0 in the past and encountered problems, update to the new v1.1 and try again!

Notable Changes

The most effort during this release cycle was spent on improving the correctness of heaptrack. The initial version suffered from bugs that could lead to corrupted or truncated data files. Heaptrack v1.1 is generally much better in this regard, and you should be able to use it in more situations than before. Furthermore, attaching heaptrack to an already-running process will catch more allocations and thus produce more accurate data. To verify the quality of the heaptrack code base, more tests have been added as well. These tests also finally enable us to use Valgrind or the Sanitizers on most of the heaptrack code, which wasn’t possible previously.

Additionally, some important new features have been added which greatly improve the usability of heaptrack:

  1. When extended debug information is available, stack traces now include inlined frames.
  2. Split debug information in separate files is now supported.
  3. Compressed debug information is properly handled.
  4. The embedded flamegraph view is now searchable.

Finally, a good deal of work went into optimizing heaptrack to further reduce its overhead. The initial version was already quite good from a performance point of view, but version 1.1 is even better! Most notably, the analysis of large data files is now often much faster. This is in large part due to the new optional dependency on zstd. This fantastic state-of-the-art compression algorithm greatly reduces the CPU overhead of compressing the heaptrack data during recording. But not only that – the decompression overhead at analysis time is also significantly lower compared to the standard gzip compression. In case you are wondering: data files are now often slightly smaller, too!

Last but not least, heaptrack v1.1.0 can be downloaded as a portable AppImage which should run on most 64-bit Linux systems in use these days!

Download heaptrack v1.1.0

If possible, wait for your distribution to provide you with an updated package for heaptrack v1.1.0. Otherwise, download the AppImage, make it executable and run it. If neither of these two options works for you, grab the sources and compile the code for your target platform:

The GPG signatures have been created by Milian Wolff with the key A0C6B72C4F1C5E7C.

Many thanks to the various people who contributed to this release. Please continue to hand in your patches, preferably via KDE’s phabricator instance or via heaptrack on GitHub. Bugs can be reported on bugs.kde.org.

If your company needs commercial support for heaptrack, then get in touch with us at KDAB. We offer workshops and trainings specifically about profiling and debugging on Linux.

The post Heaptrack v1.1.0 release appeared first on KDAB.


Sune Vuorela: Modern C++ and Qt – part 2.


I recently did a short tongue-in-cheek blog post about Qt and modern C++. In the comments, people discovered that several compilers can effectively optimize std::make_unique<>().release() to a simple new statement, which was kind of a surprise to me.

I have recently written a new program from scratch (more about that later), and I tried to force myself to use standard library smart pointers much more than I normally do.

I ended up trying to apply a set of rules for memory handling to my code base to see where it would lead.

  • No naked deletes
  • No new statements, unless the result is handed directly to a Qt function taking ownership of the pointer (to avoid silliness like the previous one)
  • Raw pointers in the code are observer pointers. We can do this in new code, but in older code it is hard to argue that.

It resulted in code along these lines (concrete types are illustrative):

m_document = std::make_unique<QTextDocument>();
auto layout = std::make_unique<QVBoxLayout>();
auto textView = std::make_unique<QTextEdit>();
textView->setReadOnly(true);
textView->setDocument(m_document.get());
layout->addWidget(textView.release());
setLayout(layout.release());

By itself, it is quite OK to work with, and we get all ownership transfers documented. So maybe we should start writing code this way.

There is still a hole in the ownership hand-over, but given that Qt methods don't throw, it shouldn't be much of a problem.
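To make that hole concrete: between release() and the Qt call actually taking ownership, nothing owns the object, so anything that throws in that window would leak it. A contrived sketch (mayThrow() and the types are hypothetical, not from the code above):

auto textView = std::make_unique<QTextEdit>();
QTextEdit *raw = textView.release();
mayThrow();              // hypothetical call: if this threw, 'raw' would leak
layout->addWidget(raw);  // ownership is only transferred here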

More about my new fancy / boring application at a later point.

I still haven't fully embraced the C++17 thingies. My mental baseline is roughly the compiler in Debian Stable.

The Qt Company Blog: Calling all contributors!



One month to go till Qt Contributors’ Summit in Oslo!

The dates are June 11-12.

If you haven’t yet, now is the time to register!

There will be two intense days of discussions and work on the future of Qt. Anyone who has an interest in, and has in some way had an influence on, the Qt project is welcome to come to the event to talk and share their views on the current state and future of Qt.

You can find the agenda on the event wiki page. Now is the time to add your topic to the list.

Please note that when you add a topic, you need to prepare the session, and I will check that you have prepared. Also, as a change from previous years, we will not automatically create wiki pages for the sessions (people are naturally free to do so), but will instead put the tasks that come out of the sessions into bug reports, where they are more likely to be followed up.

At this point I would like to thank the sponsors of the event for making this possible!

[Sponsor logos: KDAB, Luxoft, froglogic, Intel]
Thank you and see you in Oslo!

The post Calling all contributors! appeared first on Qt Blog.

The Qt Company Blog: What’s new with the Wayland platform plugin in Qt 5.11?


Wayland is a display server protocol used on modern Linux systems. The Qt Wayland platform plugin lets Qt applications run on Wayland display servers (compositors).

Apart from bug fixes, the Qt 5.11 release contains a substantial amount of improvements, especially for desktop users.

Key composition support


Support for compose keys has been missing for a long time and has finally been added. That means you can now enter characters that require a sequence of keys, such as:

  • ¨, A to write “ä”
  • compose key, S, S to write “ß”

Qt Wayland in official binaries

Starting with Qt 5.11 and Qt Creator 4.7, binaries in the official installers now also include Qt Wayland (previously you would have to build it yourself).

So the official build of Qt Creator itself now runs on Wayland, as well as the applications you build with the official Qt packages.

UPDATE: It’s still unclear whether QtWayland will be in the official release of Qt Creator 4.7.0. Due to a mistake, Qt Wayland was made the default platform plugin on gnome-shell sessions in Qt 5.11. The combination of gnome-shell and Qt Wayland still results in too many bugs, and hence Qt Wayland was removed from the Qt Creator pre-release builds altogether, at least until Qt Wayland is made opt-in again.

Qt Creator 4.7 nightly running on Wayland

Nightlies for Qt Creator 4.7 are available if you want to try it out before the official release.

Fallback to X11 if Wayland is not available

The common way of selecting a Qt platform plugin has been to set the environment variable QT_QPA_PLATFORM=wayland. This has been a problem on Linux desktops, because some applications—for instance the official Qt Creator package—use a bundled version of Qt that doesn't include Wayland, and will fail to launch with the following message:

This application failed to start because it could not find or load the Qt platform plugin "wayland" in "".

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.

Reinstalling the application may fix this problem.

In Qt 5.11 we added support for fallback platform plugins. This means you can now set QT_QPA_PLATFORM="wayland;xcb", which makes Qt use the xcb (X11) plugin if Wayland is not available.
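For completeness, the same fallback can also be requested from inside an application before the QGuiApplication is created. A minimal sketch (setting the variable in the environment remains the usual approach):

#include <QGuiApplication>

int main(int argc, char *argv[])
{
    // Prefer Wayland, fall back to xcb (X11) if Wayland is not available.
    // This must happen before the QGuiApplication is constructed.
    qputenv("QT_QPA_PLATFORM", "wayland;xcb");
    QGuiApplication app(argc, argv);
    return app.exec();
}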

Improved high-dpi support

If you have a multi-monitor setup with both high-dpi and low-dpi screens, you’ll be happy to hear that windows now switch to the appropriate scale when moved from one screen to another. No more tiny or blurry windows 🙂

Testing and continuous integration

QA-wise Qt Wayland has seen significant improvements lately. We now run a subset of the QtBase unit tests on every patch set submitted, which means we will catch more bugs earlier. However, this is a topic suitable for a separate blog post.

News from the development branch

There have also been many recent changes that didn't make it into the 5.11 release. State changes, such as resizing or maximizing, have seen a lot of work. We now finally support maximizing and full screen on xdg-shell-v6. We have also added a new shell integration for xdg-shell stable.

Qt Wayland backports repository

If you want to test the new features and fixes in Qt Wayland, but don’t want to wait for a release, or if you don’t want to upgrade or compile all of Qt, I have set up an unofficial qtwayland-backports repository.

It contains branches with new versions of Qt Wayland that compile against older versions of Qt. For example, if you use Qt 5.10.x, you can still test recent changes from the Qt Wayland dev branch using the dev-for-5.10 branch.

Arch Linux users can install the AUR package, qt5-wayland-dev-backport-git, as a drop-in replacement for qt5-wayland. Again, note that these backports are unofficial and there are no guarantees that I will keep updating them.

The post What’s new with the Wayland platform plugin in Qt 5.11? appeared first on Qt Blog.

The Qt Company Blog: Qt Creator’s Clang Code Model


Starting with the upcoming Qt Creator 4.7, the Clang Code Model is enabled by default. That’s a great time to look at the differences between our old code model and the new Clang Code Model. But first things first.

History of C/C++ Support in Qt Creator

Since the beginning of Qt Creator, the C/C++ support has been implemented around a custom C++ front-end (lexer, preprocessor, parser, lookup). The whole support was referred to as the "C/C++ Code Model", the code model being the collection of language-specific services, for example code completion and semantic highlighting.

Back then, the next C++ standard was a long time coming (C++0x – C++1x – C++11), the tooling support from Clang was not where it is today, and a custom C++ front-end gave us some extra flexibility when it comes to performance, error recovery and support of Qt specifics. The code model around the custom front-end served us well (and still does) – the appropriate trade-offs between precision and performance were made back then. However, maintaining a custom C++ front-end is not a trivial task, notably so during the interesting times for the company we were part of back then, and with only a few developers on it. With the availability of Clang and its tooling, especially from the point where it became self-hosting, we did some experiments to base the code model on it – the "Clang Code Model" was born. The experiments looked promising in general, but stability and performance were a problem from the beginning, especially when considering all platforms.

Fast forward: today C++ evolves much faster, Clang and its tooling are prospering and we have picked up working on the Clang Code Model.

Status of the Clang Code Model

We believe we have addressed the most severe performance and stability issues by now. With the Clang Code Model you get up-to-date language support based on libclang 6.0, greatly improved precision and better diagnostics.

The first big area we have tackled is the set of services related to the currently open file, not yet taking any project or global index information into account – that is work in progress. Currently, the following services are implemented with the Clang Code Model:

  • Code completion
  • Syntactic and semantic highlighting
  • Diagnostics with fixits and integrated Clang-Tidy and Clazy checks
  • Follow Symbol (partly)
  • Outline of symbols
  • Tooltips
  • Renaming of local symbols

For the services that have not yet been ported, the implementations based on the custom front-end are still used. This includes, for example, indexing, find usages and refactoring.

Due to Clang's precision, the Clang Code Model is inherently slower than the old code model and has weaker error recovery capabilities. However, the extra precision and diagnostics will result in fewer build errors and thus reduce your edit-build cycle count.

Differences from the old code model

Now what are the visible changes that you can observe as a user?

Updated language support

The Clang Code Model is based on libclang 6.0 and as such it can parse C++17 and more.

Precision

You will immediately notice the improved precision in highlighting and diagnostics.

For example, our custom front-end never validated function calls, so you would only notice invalid ones when building. With the Clang Code Model, only valid function calls are properly highlighted, whereas invalid ones are rendered as "Text", that is, black by default.

Another example is code completion. Items are no longer offered for declarations that are below your completion position (except class members, of course). Also, completion on const objects takes the "constness" into account.
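As a small, contrived illustration of that last point (not taken from the post):

class Settings {
public:
    int value() const;     // const member: offered when completing on a const object
    void setValue(int v);  // non-const member: not offered for a const object
};

void read(const Settings &s)
{
    // Completing after "s." only proposes const members such as value().
    int v = s.value();
    (void)v;
}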

Diagnostics

Chances are that you will notice Clang's diagnostics early on, as they are displayed as inline annotations in the editor. Detailed information is provided by tooltips. Look out for the light bulb at the end of the line, as it indicates the availability of "Fixits". These are small, local refactoring actions that fix diagnostics.

Of course the diagnostics for the current document are also available in the Issues pane. This behavior can be disabled by using the “Filter by categories” icon in the Issues pane toolbar.

You can set up diagnostic configurations in C++ > Code Model > "Manage…" in the options dialog. The predefined configurations are really just a starting point, and it is recommended to adapt them to your needs. For example, you could set up a configuration for a specific project.

Note that Clang-Tidy and Clazy checks are integrated, too. Enabling these checks for the Clang Code Model will naturally slow down re-parsing, but depending on your machine and the specific selection of checks this can be a great help. Starting from this version, it is also possible to run Clang-Tidy and Clazy checks for the whole project or a subset of it (Menu: Analyze >“Clang-Tidy and Clazy…”). For example, you could set up a diagnostic configuration for this mode that also enables the expensive checks and run it once in a while.

Code Completion

In general, completion is more context-sensitive now. For example, completing after "switch (", "case ", "int foo = " or "return " will put more relevant items at the top of the completion list.
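A tiny, made-up example of the kind of context meant here: when completing after "case " inside a switch over an enum, the enumerators of that enum are ranked first.

enum class Fruit { Apple, Banana, Cherry };

void pick(Fruit f)
{
    switch (f) {
    case Fruit::Apple:   // completion after "case " puts Fruit's enumerators on top
        break;
    default:
        break;
    }
}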

The completion of templated classes and functions is improved. Completion of unique_ptr<>/shared_ptr<> objects and friends now works on all platforms.

Does your code base make use of doxygen comments? Then you will probably be happy to see doxygen comments as part of the completion item, too.

Miscellaneous

Italic arguments in a function call indicate that the function call might modify the argument (“Output Argument” in the text editor settings).

Tooltips resolve "auto" properly and also show doxygen comments.

A visible document is automatically re-parsed if one of its headers is modified, either in an editor or on disk.

Future of C/C++ Support

Given the evolving tooling around C++ it is pretty unlikely that we will go back to a custom front-end.

Given the rise of the Language Server Protocol and the development of clangd, how do we proceed from here? Implementing an LSP client makes sense for any IDE, to also get support for other languages, and we are actively working on that. Having that will also help us evaluate clangd further.

The post Qt Creator’s Clang Code Model appeared first on Qt Blog.

The Qt Company Blog: Qt Creator 4.7 Beta released


We are happy to announce the release of Qt Creator 4.7 Beta!

C++ Support

The greatest improvements were again made to our Clang-based C++ support. First of all, we made the Clang code model the default for Qt Creator 4.7. That is quite a milestone after years of experimenting and developing, so Nikolai has wrapped up the history and current state in a separate blog post. I'll just summarize some of the most important changes in 4.7 here.

We did another upgrade of the backend to Clang 6.0, which was released this March. This brings all its improvements like new and fixed diagnostics to Qt Creator as well.

The outline pane and dropdown, the locator filter for symbols in the current document, and Follow Symbol inside the current translation unit (meaning current document and its includes) are now based on Clang too.

You can now run the static checks from Clang-Tidy and Clazy over the whole project or a subset of it. This is implemented as a new analyzer tool in Debug mode (Analyze > Clang-Tidy and Clazy). We also improved the settings, and fixed an issue where the static checks had a performance impact on completion.

Note that you can opt out of using the Clang code model by manually disabling the ClangCodeModel plugin in Help > About Plugins (Qt Creator > About Plugins on macOS). That said, it is very important that we get as much feedback as possible at this stage, so please create bug reports for all issues that you find.

QML Support

The QML code model now includes minimal support for user-defined enums, which are a new feature in Qt 5.10 and later. Additionally, some reformatting errors have been fixed.

Test Integration

If the text cursor in the C++ editor is currently inside a test function, you can run that individual test directly with the new Run Test Under Cursor action. The test integration now also marks the location of failed tests in the editor. For Google Test, we added support for filtering.

Other Improvements

The kit options got their own top-level entry in the preferences dialog, which is also the very first one in the list. In the file system view you can now create new folders. By default it now shows folders before files, but you can opt out and return to the previous, purely alphabetical sorting by unchecking the option in the filter button menu. The context menu on file items in various places (for example the projects tree, the file system view and the open documents list) has a new item that opens a properties dialog showing the name, path, MIME type, permissions and many more properties of the file.

There have been many more improvements and fixes. Please refer to our changes file for a more comprehensive list.

Get Qt Creator 4.7 Beta

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.7 Beta is also available under Preview > Qt Creator 4.7.0-beta1 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

 

Note on Wayland support: Even though stated differently in Johan’s blog post, we explicitly disabled the Wayland platform plugin for Qt Creator 4.7 beta1. It broke too heavily in some setups. We will look into adding it again as an opt-in option after the beta.

The post Qt Creator 4.7 Beta released appeared first on Qt Blog.

The Qt Company Blog: Building a Bridge from Qt to DDS


In our previous posts, we looked into various aspects of using Qt in a telemetry scenario. Part one focused on reducing message overhead, while part two was about serialization.

To demonstrate a typical IoT scenario, we used MQTT as a protocol and the Qt MQTT module available in Qt for Automation. However, the landscape of protocols in the automation world is bigger and different protocols provide different advantages, usually with a cost involved.

A well-known weakness of MQTT is that it relies on a server instance. This implies that all nodes talk to one central place, which can easily become a communication bottleneck. For MQTT version 3.1.1, many broker providers implemented their own solutions to tackle this issue, and to some extent this has been taken care of in MQTT version 5. Those solutions add additional servers which sync with each other, but do not remove the need for a server completely.

One prominent protocol which allows for server-less communication is the Data Distribution Service (DDS). DDS is a standard maintained by the Object Management Group; a full description is available on their website.

In addition to the D2D (device-to-device) communication capabilities, DDS follows a very interesting design approach: data-centric development. The idea behind data centricity is that you as a developer do not need to care about how data is transferred and/or synced between nodes; the protocol handles all of this in the background. While this is convenient, optimizations like those in our previous posts are only possible in a limited fashion.

Qt does not provide a module for DDS integration. However, existing implementations written in C++ are available, so using both technologies in one project is doable. In the following, we will go through the steps to create a Data-Centric Publish-Subscribe (DCPS) application with DDS.

In this example we are going to use the DDS implementation by RTI, which currently has the highest market adoption. Nevertheless, a couple of alternatives exist, like Vortex OpenSplice or OpenDDS. The design principles stay the same for any of those products.

To be able to sync data of the same type on all ends, a type description in the form of an IDL is required. Similar to protobuf, the sensor is described as follows:

/* Struct to describe sensor */
struct sensor_information {
    string ID; //@key
    double ambientTemperature;
    double objectTemperature;
    double accelerometerX;
    double accelerometerY;
    double accelerometerZ;
    double altitude;
    double light;
    double humidity;
};

To convert the IDL to source code, a tool called rtiddsgen is invoked during the build process. To integrate it into qmake, an extra compiler step is required.

RTIDDS_IDL = ../common/sensor.idl
ddsgen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Additionally created files get their own rule
ddsgen.variable_out = GENERATED_SOURCES
ddsgen.input = RTIDDS_IDL
ddsgen.commands = $${RTIDDS_PREFIX}\\bin\\rtiddsgen -language c++ -d $${OUT_PWD} ${QMAKE_FILE_NAME}

QMAKE_EXTRA_COMPILERS += ddsgen

rtiddsgen generates more than one source and one header file. For each IDL file (here sensor.idl), these additional files are created:

  • Sensor.cxx / .h
  • SensorPlugin.cxx / .h
  • SensorSupport.cxx / .h

The source files in particular need to become part of the project. Otherwise, they will not get compiled and you will encounter missing symbols in the linking phase.

Furthermore, adding extra compiler steps also implements a clean step that removes the generated files again. If not all files are removed properly before re-invoking rtiddsgen, the tool no longer generates correct code and causes various compile errors.

To fix this, one additional compiler step is created for each generated file. Those are:

ddsheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.h
ddsheadergen.variable_out = GENERATED_FILES
ddsheadergen.input = RTIDDS_IDL
ddsheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddsheadergen.commands = echo "Additional Header: ${QMAKE_FILE_NAME}"

ddsplugingen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Plugin.cxx
ddsplugingen.variable_out = GENERATED_SOURCES
ddsplugingen.input = RTIDDS_IDL
ddsplugingen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Depend on the output of rtiddsgen
ddsplugingen.commands = echo "Additional Source(Plugin): ${QMAKE_FILE_NAME}"

ddspluginheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Plugin.h
ddspluginheadergen.variable_out = GENERATED_FILES
ddspluginheadergen.input = RTIDDS_IDL
ddspluginheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddspluginheadergen.commands = echo "Additional Header(Plugin): ${QMAKE_FILE_NAME}"

ddssupportgen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Support.cxx
ddssupportgen.variable_out = GENERATED_SOURCES
ddssupportgen.input = RTIDDS_IDL
ddssupportgen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Depend on the output of rtiddsgen
ddssupportgen.commands = echo "Additional Source(Support): ${QMAKE_FILE_NAME}"

ddssupportheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Support.h
ddssupportheadergen.variable_out = GENERATED_FILES
ddssupportheadergen.input = RTIDDS_IDL
ddssupportheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddssupportheadergen.commands = echo "Additional Header(Support): ${QMAKE_FILE_NAME}"

QMAKE_EXTRA_COMPILERS += ddsgen ddsheadergen ddsplugingen ddspluginheadergen ddssupportgen ddssupportheadergen

Those compiler steps do nothing but echo the filename generated in the previous step, but they allow for proper cleanup. Note that setting the dependencies of the steps correctly is important. Otherwise, qmake might invoke the steps in the wrong order and try to compile a not-yet-generated source file.

Moving on to the C++ source code, the steps are rather straightforward, though there are some nuances between creating a publisher and a subscriber.

Generally, each application needs to create a participant. A participant registers itself with the domain and allows communication with all other devices, or the cloud. Next, a participant creates a topic, allowing data transmission via a dedicated channel. Participants that are not subscribed to the topic will not receive messages. This allows for filtering and reduces data transfer.

Then a publisher is created, which in turn creates a data writer. Quoting from the standard: "A Publisher is an object responsible for data distribution. It may publish data of different data types. A DataWriter acts as a typed accessor to a publisher." The same applies to subscribers as well.

    const DDS_DomainId_t domainId = 0;
    DDSDomainParticipant *participant = nullptr;

    participant = DDSDomainParticipantFactory::get_instance()->create_participant(domainId,
                                                                                  DDS_PARTICIPANT_QOS_DEFAULT,
                                                                                  NULL,
                                                                                  DDS_STATUS_MASK_NONE);
    if (!participant) {
        qDebug() << "Could not create participant."; 
        return -1;
    } 

    DDSPublisher *publisher = participant->create_publisher(DDS_PUBLISHER_QOS_DEFAULT,
                                              NULL,
                                              DDS_STATUS_MASK_NONE);
    if (!publisher) {
        qDebug() << "Could not create publisher.";
        return -2;
    }

    const char *typeName = sensor_informationTypeSupport::get_type_name();
    DDS_ReturnCode_t ret = sensor_informationTypeSupport::register_type(participant, typeName);
    if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not register type."; 
        return -3; 
    } 

    DDSTopic *topic = participant->create_topic("Sensor Information",
                                                typeName,
                                                DDS_TOPIC_QOS_DEFAULT,
                                                NULL,
                                                DDS_STATUS_MASK_NONE);
    if (!topic) {
        qDebug() << "Could not create topic."; 
        return -4; 
    } 
    DDSDataWriter *writer = publisher->create_datawriter(topic,
                                                         DDS_DATAWRITER_QOS_DEFAULT,
                                                         NULL,
                                                         DDS_STATUS_MASK_NONE);
    if (!writer) {
        qDebug() << "Could not create writer.";
        return -5;
    }

The writer object is generic so far. We want a data writer specific to the sensor information we created with the IDL. The SensorSupport.h header provides a method declaration to do exactly this:

sensor_informationDataWriter *sensorWriter = sensor_informationDataWriter::narrow(writer);

To create a sensor data object, we also use the support methods:

    sensor_information *sensorInformation = sensor_informationTypeSupport::create_data();

When a sensorInformation instance is supposed to publish its content, this is achieved with a call to write():

    ret = sensorWriter->write(*sensorInformation, sensorHandle);

After this call, the DDS framework takes care of publishing the object to all other subscribers.

For creating a subscriber, most steps are the same as for a publisher. But instead of using a narrowed DDSDataReader directly, RTI provides listeners, which allow receiving data via a callback pattern. A listener needs to be passed to the subscriber when creating a data reader:

ReaderListener *listener = new ReaderListener();
DDSDataReader *reader = subscriber->create_datareader(topic,
                                                      DDS_DATAREADER_QOS_DEFAULT,
                                                      listener,
                                                      DDS_LIVELINESS_CHANGED_STATUS |
                                                      DDS_DATA_AVAILABLE_STATUS);

The ReaderListener class looks like this:

class ReaderListener : public DDSDataReaderListener {
  public:
    ReaderListener() : DDSDataReaderListener()
    {
        qDebug() << Q_FUNC_INFO;
    }
    void on_requested_deadline_missed(DDSDataReader *, const DDS_RequestedDeadlineMissedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_requested_incompatible_qos(DDSDataReader *, const DDS_RequestedIncompatibleQosStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_sample_rejected(DDSDataReader *, const DDS_SampleRejectedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_liveliness_changed(DDSDataReader *, const DDS_LivelinessChangedStatus &status) override
    {
        // Liveliness only reports availability, not the initial state of a sensor
        // Follow up changes are reported to on_data_available
        qDebug() << Q_FUNC_INFO << status.alive_count;
    }

    void on_sample_lost(DDSDataReader *, const DDS_SampleLostStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_subscription_matched(DDSDataReader *, const DDS_SubscriptionMatchedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_data_available(DDSDataReader* reader) override;
};

As you can see, the listener is able to report a lot of information. In our example, though, we are mostly interested in the reception of data, trusting that the other parts are doing fine.

on_data_available has a DDSDataReader argument, which is narrowed as well. The reader provides a method called take(), which passes all available data updates to the invoker. Available data is delivered in sequences, specifically sensor_informationSeq.
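The narrowing itself follows the same pattern as for the writer. A sketch of how the sensorReader used below might be obtained inside on_data_available():

    sensor_informationDataReader *sensorReader = sensor_informationDataReader::narrow(reader);
    if (!sensorReader) {
        qDebug() << "Could not narrow reader.";
        return;
    }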

    sensor_informationSeq data;
    DDS_SampleInfoSeq info;
    DDS_ReturnCode_t ret = sensorReader->take(
        data, info, DDS_LENGTH_UNLIMITED,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);

    if (ret == DDS_RETCODE_NO_DATA) {
        qDebug() << "No data, continue...";
        return;
    } else if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not receive data:" << ret;
        return;
    }

    for (int i = 0; i < data.length(); ++i) {
        if (info[i].valid_data) {
            qDebug() << data[i];
        } else {
            qDebug() << "Received Metadata on:" << i;
        }
    }

Note that the sequences contain all updates for all sensors; we are not filtering for one specific sensor.
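If only one specific sensor were of interest, DDS also offers content-filtered topics for this. A hedged sketch using RTI's API (the filter expression and sensor ID are made up for illustration):

    // Only receive samples whose ID field matches; the data reader would then
    // be created on filteredTopic instead of topic.
    DDS_StringSeq noParams;
    DDSContentFilteredTopic *filteredTopic =
            participant->create_contentfilteredtopic("FilteredSensorInformation",
                                                     topic,
                                                     "ID = 'sensor-42'",
                                                     noParams);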

Once we are done with processing the data, we have to return it to the reader.

    ret = sensorReader->return_loan(data, info);

One optimization capability in DDS is the use of zero-copy data. This means that the internal representation of the data is loaned to the developer to avoid creating copies, which can be important when the data size is large.

Again, the source is located here. When running the application, it is important to note that the NDDSHOME environment variable needs to be specified to create a virtual mesh network for experimentation.

As a last step, we want to integrate the publisher with a QML application. This demo is also available in the repository.


We are going to re-use the principles from the protobuf examples in the previous post of this series. Basically, we create a SensorInformation class which holds a sensor_information member created via DDS.

class SensorInformation : public QObject
{
    Q_OBJECT
    Q_PROPERTY(double ambientTemperature READ ambientTemperature WRITE setAmbientTemperature NOTIFY ambientTemperatureChanged)
    Q_PROPERTY(double objectTemperature READ objectTemperature WRITE setObjectTemperature NOTIFY objectTemperatureChanged)
[…]
    void init();
    void sync();
private:
    sensor_information *m_info;
    sensor_informationDataWriter *m_sensorWriter;
    DDS_InstanceHandle_t m_handle;
    QString m_id;
};

In this very basic example, init() initializes the participant, publisher and data writer, similar to the steps described above.

We added a function sync(), which syncs the current data state to all subscribers.

void SensorInformation::sync()
{
    DDS_ReturnCode_t ret = m_sensorWriter->write(*m_info, m_handle);
    if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not write data.";
    }

}

 

sync() is invoked whenever a property changes, for instance:

void SensorInformation::setAmbientTemperature(double ambientTemperature)
{
    if (qFuzzyCompare(m_info->ambientTemperature, ambientTemperature))
        return;

    m_info->ambientTemperature = ambientTemperature;
    emit ambientTemperatureChanged(ambientTemperature);
    sync();
}

And that is all that needs to be done to integrate DDS into a Qt application. For the subscriber part, we could use the callback-based listeners and update the objects accordingly. This is left as an exercise for the reader.

While experimenting with DDS and Qt, a couple of ideas came up. For instance, a QObject-based declaration could be parsed and a matching IDL generated for all its properties. That IDL would automatically be integrated, and source code templates would be created for the declaration, for both publisher and subscriber. That would allow developers to use Qt only and get an IoT syncing mechanism, DDS in this case, out of the box. What would you think about such an approach?

To summarize, the IoT world is full of different protocols and standards. Each has its pros and cons, and Qt cannot (and will not) provide an implementation for all of them. Recently, we have been focusing on MQTT and OPC UA (another blog post on that topic will come soon). But DDS is another good example of a technology used in the field. What we wanted to highlight with this post is that it is always possible to integrate other C++ technologies into Qt and use both side by side, sometimes even benefiting from each other.

The post Building a Bridge from Qt to DDS appeared first on Qt Blog.

Viking Software Blog: Getting the most of signal/slot connections


Signals and slots were one of the distinguishing features that made Qt an exciting and innovative tool back in the day. But sometimes you can teach new tricks to an old dog, and QObjects gained a new way to connect signals and slots in Qt 5, plus some extra features for connecting to other functions which are not slots. Let's review how to get the most out of that feature. This assumes you are already moderately familiar with signals and slots.


Cutelyst Framework: Cutelyst on TechEmpower benchmarks round 16


Yesterday TechEmpower released the results for round 16 of their benchmarking tests; you can see their blog post about it here. And like for round 15, I'd like to add my commentary about it here.

Before you look at the results web site, it's important to be aware of a few things. First, round 16 runs on new hardware that is more powerful than in previous rounds. They also Dockerized the tests, which allowed us to pull different distro images, cache package installs and isolate from other frameworks. So don't try to compare to round 15.

Putting Cutelyst under testing there has brought it many benefits. In previous rounds we noticed that, when testing on a server with many CPU cores, letting the operating system do the scheduling wasn't a good idea, so we added a CPU affinity feature to cutelyst-wsgi; while uWSGI also has this, our logic does the core pinning for threads and not only for processes.

While testing at home on an AMD Phenom II X4, pre-fork mode was much faster than threaded mode, but on my 5th-gen Intel laptop the results were closer and threading was much better. Thanks to TFB I found out that each process was being assigned to CPU core 0, which made 28 processes bound to a single core. In this new round you can see that the difference between running threaded or in pre-fork mode is negligible; in some tests pre-fork is even faster. Pre-fork does consume more RAM: running 100 threads of Virtlyst uses around 35 MiB, while 100 processes use around 130 MiB, but multiple processes are better if your code happens to crash.

An important feature of the TechEmpower benchmarks is its filters; they allow you to filter out stuff that doesn't matter to you. Thanks to the above fix, you can look at the results and see the close threads vs. pre-fork match here.

Using an HTML templating engine is very important for real-world apps. In round 16 I've added a test that uses Grantlee for rendering; it is around 46% slower than creating the HTML directly in C++. There might be room for improvements in Grantlee, but the fortunes results aren't bad.

Some people asked why there were so many occurrences of Cutelyst in the tests. The reason is that these benchmarks allow you to test some features and see what performs better. In round 15 it became clear that using EPoll over Qt's glib-based event loop was a clear win, so in round 16 we no longer test Qt's glib dispatcher and EPoll is now the default event dispatcher in Cutelyst 2 on Linux.

This round, however, I tried to reason about TCP_NODELAY, and while the results are close, it's fairly clear that due to the blocking nature of the SQL tests, TCP_NODELAY decreases latency and increases performance a bit, since it does not wait to send a bigger TCP packet.

As I also mentioned at the release of Cutelyst 2.4.0, a fix was made to avoid a stack overflow, and you can see the results in the "Plain Text" test: before that fix Cutelyst was crashing and respawning, which limited the results to 1,000,000 requests/second; after the fix it is 2,800,000 requests/second.

This round used Ubuntu 18.04 as base, Cutelyst 2.4.0 and Qt 5.9.5.

Last but not least, if you still try to compare to round 15 :) you might notice that Cutelyst went lower in the ranking. That's in part due to frameworks getting optimizations, but mostly due to new micro/platform frameworks. You can enable the Fullstack classification filter if you want to compare frameworks that provide features similar to what Cutelyst offers.

Woboq: Integrating QML and Rust: Creating a QMetaObject at Compile Time


In this blog post, I would like to present a research project I have been working on: trying to use QML from Rust and, in general, using a C++ library from Rust.

The project is a Rust crate which allows creating a QMetaObject at compile time from pure Rust code. It is available here: https://github.com/woboq/qmetaobject-rs

Qt and Rust

There are already numerous projects that attempt to integrate Qt and Rust. A great GUI toolkit should work with a great language.

As far back as 2014, the project cxx2rust tried to generate automatic bindings to C++, and in particular to Qt 5. The blog post explains all the problems. Another project that automatically generates C++ bindings for Qt is cpp_to_rust. I would not pursue this way of automatically creating bindings, because it cannot produce a binding that can be used from idiomatic Rust code without using unsafe.

There is also qmlrs. The idea here is to manually develop a small wrapper C++ library that exposes extern "C" functions. A Rust crate with a good and safe API can then internally call these wrappers.
Similarly, the project qml-rust does approximately the same, but uses the DOtherSide bindings as the Qt wrapper library. The same library is used for the D and Nim bindings for QML.
These two projects only concentrate on QML and not QtWidgets or the whole of Qt. Since the API is then much smaller, this greatly simplifies the tedious work of creating the bindings manually. Both of these projects generate a QMetaObject at runtime from information given by Rust macros. Also, you cannot use arbitrary types as parameters for your properties or method arguments; you are limited to converting to built-in types.

Finally, there is Jos van den Oever's Rust Qt Binding Generator. To use this project, one has to write a JSON description of the interface one wants to expose; the generator will then generate the Rust and C++ glue code so you can easily call Rust from your Qt C++/QML application.
What I think is a problem is that you are still expected to write some C++ and add an additional step to your build system. That is perfectly fine if you want to add Rust to an existing C++ project, but not if you just want a GUI for a Rust application. Also, writing this JSON description is a bit alien.

I started the qmetaobject crate mainly because I wanted to create the QMetaObject at Rust compile time. The QMetaObject is a data structure which contains all the information about a class deriving from QObject (or Q_GADGET), so the Qt runtime can connect signals with slots or read and write properties. Normally, the QMetaObject is built at compile time from a C++ file generated by moc, Qt's meta-object compiler.
I'm a fan of creating QMetaObjects: I am contributing to Qt, and I also wrote moc-ng and Verdigris, which are all about creating QMetaObjects. Verdigris uses the power of C++ constexpr to create the QMetaObject at compile time, and I wanted to try using Rust to see if it could also be done at compile time.

The qmetaobject crate

The crate uses a custom derive macro to generate the QMetaObject. Custom derive works by adding an annotation in front of a Rust struct, such as #[derive(QObject)] or #[derive(QGadget)]. Upon seeing this annotation, the rustc compiler calls the function from the qmetaobject_impl crate which implements the custom derive. The function has the signature fn(input : TokenStream) -> TokenStream. It is called at compile time, takes as input the source code of the struct it derives, and should generate more source code that will then be compiled.
What we do in this custom derive macro is first parse the content of the struct and look for annotations. I've used a set of macros such as qt_property!, qt_method! and so on, similar to Qt's C++ macros. I could also have used custom attributes, but I chose macros as it seemed more natural coming from the Qt world (but perhaps this should be revised).

Let's simply go over a dummy example of using the crate.

extern crate qmetaobject;
use qmetaobject::*; // For simplicity

// Deriving from QObject will automatically implement the QObject trait and
// generates QMetaObject through the custom derive macro.
// This is equivalent to add the Q_OBJECT in Qt code.
#[derive(QObject,Default)]
struct Greeter {
  // We need to specify a C++ base class. This is done by specifying a
  // QObject-like trait. Here we can specify other QObject-like trait such
  // as QAbstractListModel or QQmlExtensionPlugin.
  // The 'base' field is in fact a pointer to the C++ QObject.
  base : qt_base_class!(trait QObject),
  // We declare the 'name' property using the qt_property! macro.
  name : qt_property!(QString; NOTIFY name_changed),
  // We declare a signal. The custom derive will automatically create
  // a function of the same name that can be called to emit it.
  name_changed : qt_signal!(),
  // We can also declare invokable method.
  compute_greetings : qt_method!(fn compute_greetings(&self, verb : String) -> QString {
      return (verb + " " + &self.name.to_string()).into()
  })
}

fn main() {
  // We then use qml_register_type as an equivalent to qmlRegisterType.
  qml_register_type::<Greeter>(cstr!("Greeter"), 1, 0, cstr!("Greeter"));
  let mut engine = QmlEngine::new();
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0; import Greeter 1.0;
    Window {
      visible: true;
      // We can instantiate our rust object here.
      Greeter { id: greeter; name: 'World'; }
      // and use it by accessing its property or method.
      Text { text: greeter.compute_greetings('hello'); }
    }"#.into());
  engine.exec();
}

In this example, we used qml_register_type to register the type with QML, but we can also set properties on the global context. Here is an example with a model, which also demonstrates QGadget:

// derive(QGadget) is the equivalent of Q_GADGET.
#[derive(QGadget,Clone,Default)]
struct Point {
  x: qt_property!(i32),
  y: qt_property!(i32),
}

#[derive(QObject, Default)]
struct Model {
  // Here the C++ class will derive from QAbstractListModel
  base: qt_base_class!(trait QAbstractListModel),
  data: Vec<Point>
}

// But we still need to implement the QAbstractListModel manually
impl QAbstractListModel for Model {
  fn row_count(&self) -> i32 {
    self.data.len() as i32
  }
  fn data(&self, index: QModelIndex, role:i32) -> QVariant {
    if role != USER_ROLE { return QVariant::default(); }
    // We use the QGadget::to_qvariant function
    self.data.get(index.row() as usize).map(|x|x.to_qvariant()).unwrap_or_default()
  }
  fn role_names(&self) -> std::collections::HashMap<i32, QByteArray> {
    vec![(USER_ROLE, QByteArray::from("value"))].into_iter().collect()
  }
}

fn main() {
  let mut model = Model { data: vec![ Point{x:1,y:2} , Point{x:3, y:4} ], ..Default::default() };
  let mut engine = QmlEngine::new();
  // Registers _model as a context property.
  engine.set_object_property("_model".into(), &mut model);
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0;
    Window {
      visible: true;
      ListView {
        anchors.fill: parent;
        model: _model;  // We reference our Model object
        // And we can access the property or method of our gadget
        delegate: Text{ text: value.x + ','+value.y; } }
    }"#.into());
  engine.exec();
}

Other implemented features include the creation of Qt plugins such as a QQmlExtensionPlugin without writing a line of C++, only using Rust and cargo. (See the qmlextensionplugins example.)

QMetaObject generation

The QMetaObject consists of a bunch of tables in the data section of the binary: a table of strings and a table of integers. There is also a function pointer to code used to read/write the properties or call the methods.

The custom derive macro will generate the tables as &'static [u8]. The moc-generated code contains QByteArrayData, built in C++, but since we don't want to use a C++ compiler to generate the QMetaObject, we have to lay out all the bytes of the QByteArrayData one by one. Another tricky part is the creation of the Qt binary JSON for the plugin metadata. The Qt binary JSON is also an undocumented data structure which needs to be built byte by byte, respecting many invariants such as alignment and order of the fields.

The code for the static_metacall is just an extern "C" fn. Then we can assemble all these pointers into a QMetaObject. We cannot create a const static structure containing pointers, so this is implemented using the lazy_static! macro.

QObject Creation

Qt needs a QObject* pointer for our object. It has virtual methods to get the QMetaObject. The same applies to QAbstractListModel or any other class we would like to inherit from, which have many virtual methods we wish to override.

We then have to materialize an actual C++ object on the heap. This C++ counterpart is created by some of the C++ glue code. We store a pointer to this C++ counterpart in the field annotated with the qt_base_class! macro. The glue code instantiates a RustObject, a class that inherits from QObject (or any other QObject derivative) and overrides the virtual methods to forward them to a callback in Rust, which is then able to call the right function on the Rust object.

One of the big problems is that in Rust, contrary to C++, objects can be moved in memory at will. This is a problem, as the C++ object contains a pointer to the Rust object, so the Rust object somehow needs to be fixed in memory. This can be achieved by putting it into a Box or an Rc, but even then, it is still possible to move the object in safe code. This problem is not entirely solved, but the interface takes the object by value and moves it to an immutable location. The object can then still be accessed safely from a QJSValue object.

Note that QGadget does not need a C++ counter-part.

C++ Glue code

For this project I need a bit of C++ glue code to create the C++ counterpart of my object, or to access the C++ API for Qt types or the QML API. I am using the cpp! macro from the cpp crate. This macro allows embedding C++ code directly into Rust code, with very little boilerplate compared to manually creating callbacks and declaring extern "C" functions.
I even contributed a cpp_class macro which lets you wrap C++ classes from Rust.

Should an API be missing, it is easy to add the missing wrapper function. Also, when we want to inherit from a class, we just need to imitate what is done for QAbstractListView, that is, override all the virtual functions we want to override and forward them to the functions from the trait.

Final Words

My main goal with this crate was to see if we could integrate QML with idiomatic and safe Rust code, without requiring the developer to use C++ or any other alien tool. I also had performance in mind and wanted to create the QMetaObject at compile time and limit the amount of conversions or heap allocations.
Although there are still some problems to solve and the exposed API is far from complete, this is already a beginning.

You can get the qmetaobject crate at this URL: https://github.com/woboq/qmetaobject-rs

The Qt Company Blog: Optimizing Device Communication with Qt MQTT


Qt for Automation has been launched in conjunction with Qt 5.10 as an AddOn to Qt for Application Development or Qt for Device Creation. One module in that offering is our client-side solution for MQTT called Qt MQTT.

Qt MQTT focuses on the client side, helping developers to create sensor devices and/or gateways that manage data to be sent via MQTT. MQTT describes itself as lightweight, open and simple. The principal idea is to reduce the protocol overhead as much as possible. The most used version of this protocol is MQTT 3.1.1, usually referred to as MQTT 4. The MQTT 5 standard has been agreed on recently, and we are working on adding support for it in Qt MQTT. But that will be part of another announcement later this year.

To verify the functionality of a module and also to provide usage guidelines for developers, The Qt Company created a demo called SensorTag. We will use this demo as the basis for this blog series.

A brief introduction can be viewed in this video:

The source code of this demo is located in our Boot 2 Qt demo repository here ( http://code.qt.io/cgit/qt-apps/boot2qt-demos.git/tree/tradeshow/iot-sensortag ).

Basically, this demo describes a scenario in which multiple sensors report data to a gateway (in this case a Raspberry Pi 3) via Bluetooth, which in turn sends the data to an MQTT broker in the cloud. Multiple gateways mesh up to form a sensor network.


Each sensor propagates updates for the values it measures, more specifically:

  • Ambient Temperature
  • Object Temperature
  • Acceleration
  • Orientation / angular velocity
  • Magnetism
  • Altitude
  • Light
  • Humidity

As the focus of this series is data transmission, the sensor/gateway combination will be referred to simply as a "device".

The demo serves two purposes: it has a pleasant user interface, which appeals to many, and it showcases how easy it is to integrate MQTT (and Qt MQTT).

MQTT is a publish/subscribe protocol, which means that data is published to a specific topic, and recipients register themselves with a broker (server) to receive notifications for each message published.

For the devices above, available messages are:

  • [Topic: Sensors/active, Data: ID]: Every 5 seconds a sensor publishes an “Online” message, notifying subscribers that the device is still active and sending data. On initial connect it also includes a Will message “Offline”. A will message is sent whenever a device disconnects to notify all subscribers with a “Last Will”. Hence, as soon as the network is disconnected, the broker broadcasts the last will to all subscribed parties.
  • [Topic: Sensor/<id>/<type>, Data: <value>]: This is sent by each sensor whenever a value for a specific data type from the above list changes. The user interface from the video subscribes to these messages and updates the graphics accordingly.

Side note: One additional use case could be to check the temperature of all devices. In that case, "wildcard" subscriptions can be very useful. Subscribing to "Sensor/+/temperature" will lead to receiving temperature updates from all sensors.
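With Qt MQTT, such a wildcard subscription could look roughly like this (a sketch following the demo's topic scheme):

    // Receive temperature updates from all sensors via the '+' wildcard.
    QMqttSubscription *subscription =
            m_client->subscribe(QMqttTopicFilter(QStringLiteral("Sensor/+/temperature")));
    if (!subscription)
        qDebug() << "Could not subscribe.";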

Currently, around 10 sensors are registered at the same time at peak, mostly when the demo is presented at trade shows. As mentioned above, the demo code exists to showcase an integration, not necessarily the most performant or energy-saving solution. When moving towards production, a couple of questions need to be asked:

  • What would happen in a real-world scenario? Would the demo scale up to thousands of devices reporting data?
  • Does the demo fit to requirements on hardware and/or battery usage?
  • Is it within the boundaries of communication limitations of Low Power WAN solutions?

In case you are not familiar with LPWAN solutions like LoRa or Sigfox, I highly recommend Massimo's talk (https://www.youtube.com/watch?v=s9h7osaSWeU) from the Qt World Summit 2017.

 

To simplify the source to look at and reduce it to a minimal, non-UI example, there is a minimal representation available here. In it, the SensorInformation class has a couple of properties which get updated frequently:

class SensorInformation : public QObject
{
    Q_OBJECT
    Q_PROPERTY(double ambientTemperature READ ambientTemperature WRITE setAmbientTemperature NOTIFY ambientTemperatureChanged)
    Q_PROPERTY(double objectTemperature READ objectTemperature WRITE setObjectTemperature NOTIFY objectTemperatureChanged)
    Q_PROPERTY(double accelerometerX READ accelerometerX WRITE setAccelerometerX NOTIFY accelerometerXChanged)
    Q_PROPERTY(double accelerometerY READ accelerometerY WRITE setAccelerometerY NOTIFY accelerometerYChanged)
    Q_PROPERTY(double accelerometerZ READ accelerometerZ WRITE setAccelerometerZ NOTIFY accelerometerZChanged)
    Q_PROPERTY(double altitude READ altitude WRITE setAltitude NOTIFY altitudeChanged)
    Q_PROPERTY(double light READ light WRITE setLight NOTIFY lightChanged)
    Q_PROPERTY(double humidity READ humidity WRITE setHumidity NOTIFY humidityChanged)
[..]

 

Whenever a property is updated the client publishes a message:

    m_client->publish(QString::fromLatin1("qtdemosensors/%1/ambientTemperature").arg(m_id),
                      QByteArray::number(ambientTemperature));

 

The device update rate depends on the type of sensor; items like temperature and light intensity are updated less frequently than, for instance, acceleration.

To get an idea of how much data is sent, example Part 1B hooks into the client's transport capabilities. The MQTT standard has only a limited number of requirements for the transport: whatever transmits the data must send it ordered, lossless and bi-directionally. Theoretically, anything deriving from QIODevice is capable of this. QMqttClient allows specifying a custom transport via QMqttClient::setTransport(). Here, we use the following:

 

class LoggingTransport : public QTcpSocket
{
public:
    LoggingTransport(QObject *parent = nullptr);

protected:
     qint64 writeData(const char *data, qint64 len) override;
private:
    void printStatistics();
    QTimer *m_timer;
    QMutex *m_mutex;
    int m_dataSize{0};
};

Inside the constructor a timer is created to invoke printStatistics() at a regular interval. writeData() simply stores the length of the data to be sent and then passes it on to QTcpSocket::writeData().
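A minimal sketch of what these two member functions might look like (the demo’s actual implementation may differ slightly):

qint64 LoggingTransport::writeData(const char *data, qint64 len)
{
    {
        // printStatistics() is invoked from a timer, so guard the counter.
        QMutexLocker locker(m_mutex);
        m_dataSize += static_cast<int>(len);
    }
    // Forward the data to the real socket implementation.
    return QTcpSocket::writeData(data, len);
}

void LoggingTransport::printStatistics()
{
    QMutexLocker locker(m_mutex);
    qDebug() << "Bytes written so far:" << m_dataSize;
}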

To add the LoggingTransport to a client, all one needs to do is:

    m_transport = new LoggingTransport(this);
    m_client = new QMqttClient(this);
    m_client->setTransport(m_transport, QMqttClient::AbstractSocket);

The output shows that after 11 seconds one sensor sends about 100 KB of data. And this is just one sensor. Considering the “real-world” scenario, this is clearly unacceptable for distribution. Hence, the effort is now to reduce the number of bytes to be published without losing information.

The demo itself uses one connection at all times, meaning it does not need to dis- or reconnect. The connect statement in MQTT 3.1.1 is around 10 bytes plus the ID of the client. If a device reconnected each time it sends data, this could add up to a significant amount. But in this case, we have “optimized” it already.

Consequently, we move on to the publishing step. There are two options: reduce the size of each message, and reduce the number of messages.

For the former, the design of a publish message for ambient temperature is the following:

Bytes   Content
1       Publish statement & configuration
1       Remaining length of the message
2       Length of the topic
71      Topic: qtdemosensors/{8f8fde60-933d-44cf-b3a7-8dac62425a63}/ambientTemperature
2       ID of the message
1..8    Value (string for double)

It is obvious that the topic itself is responsible for most of the size of the message, especially as the payload storing the value is just a fraction of it.

Concepts which can be applied here are:

  • Shorten the “root” topic (qtdemosensors -> qtds)
  • Shrink the size of the ID (UUID -> 8 digit number)
  • Replace the clear text of sensortype with an enum (ambientTemperature -> 1)

All those approaches come with a price. Using enums instead of clear-text property names reduces the readability of a message: the subscriber always has to know what type of data corresponds to an ID. Reducing the size of the ID potentially limits the maximum number of connected devices, etc. But applying these changes more than triples the time until the 100 KB barrier is reached (See example Part 1C).
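Applied to the publish call shown earlier, a hedged sketch of the shortened variant could look like this (the enum and m_shortId are illustrative names, not the demo’s exact identifiers):

// Sketch only: shorter root topic, 8-digit ID and an enum instead of the property name.
enum SensorDataType { AmbientTemperature = 1, ObjectTemperature, Light /* ... */ };

m_client->publish(QString::fromLatin1("qtds/%1/%2").arg(m_shortId).arg(AmbientTemperature),
                  QByteArray::number(ambientTemperature));

The subscriber side then needs the same enum mapping to interpret the topics, which is exactly the readability trade-off mentioned above.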

At this stage, it becomes impossible to reduce the message overhead further without losing information. In the showcase, I mentioned that the simulated sensors are to be used in the field, potentially sending data via LPWAN. Typically, sensors in this area should not send data at such a high frequency. If that is the case, there are two additional options.

First, the messages can be combined into one single message containing all sensor properties. The table above showed that the value part accounts for just a fraction of the message overhead.

One approach is to use JSON to propagate the properties; this can be seen in example Part 1d. QJsonObject and QJsonDocument are very handy to use, and QJsonDocument::toJson() exports the content to a QByteArray, which fits perfectly into an MQTT message.

 

void SensorInformation::publish()
{
    QJsonObject jobject;
    jobject["AmbientTemperature"] = QString::number(m_ambientTemperature);
    jobject["ObjectTemperature"] = QString::number(m_objectTemperature);
    jobject["AccelerometerX"] = QString::number(m_accelerometerX);
    jobject["AccelerometerY"] = QString::number(m_accelerometerY);
    jobject["AccelerometerZ"] = QString::number(m_accelerometerZ);
    jobject["Altitude"] = QString::number(m_altitude);
    jobject["Light"] = QString::number(m_light);
    jobject["Humidity"] = QString::number(m_humidity);
    QJsonDocument doc( jobject );
    m_client->publish(QString::fromLatin1("qtds/%1").arg(m_id), doc.toJson());
}

 

The size of an MQTT publish message is now around 272 bytes, including all properties. As mentioned before, this comes at the cost of some loss of information, but it significantly reduces the required bandwidth.

To summarize this first part, we have looked into various solutions to reduce the amount of data being sent from a device to an MQTT broker with minimal effort. Some approaches trade off readability or extensibility, others come with a loss of information before data transmission. It really depends on the use-case and the scenario in which an IoT solution is put into production. But all of these can easily be implemented with Qt. Until now, we have only covered Qt’s feature set on JSON, MQTT, networking, and connectivity via Bluetooth.

In the meantime, you can also find more information on our website and write your comments below.

The post Optimizing Device Communication with Qt MQTT appeared first on Qt Blog.

KDAB on Qt: QtDay Italy, May 23-24


Once more in Florence, QtDay Italy is almost upon us and we’ll be there.

Giuseppe D’Angelo will be giving a talk on Gammaray, KDAB’s open source profiling and debugging tool.

GammaRay is a “high-level debugger”, able to show the status of various Qt subsystems inside an application, with an easy to use interface (that does not require any knowledge of Qt internals). GammaRay can, amongst other things, show the properties of any QObject in the application, show the status of all state machines, inspect and debug layouting and stacking of both widgets and Qt Quick applications, analyze raster painting, show the 3D geometry in a Qt3D scene, and much more.

This talk introduces the basics of GammaRay usage, with some interesting cases as examples of problem solutions.

See you in Florence!

The post QtDay Italy, May 23-24 appeared first on KDAB.

KDAB on Qt: KDAB at SIGGRAPH 2018


Yes, folks. This year SIGGRAPH 2018 is in Canada and we’ll be there at the Qt booth, showing off our latest tooling and demos. These days, you’d be surprised where Qt is used under the hood, even by the biggest players in the 3D world!

SIGGRAPH 2018 is a five-day immersion into the latest innovations in CG, Animation, VR, Games, Digital Art, Mixed Reality and Emerging Technologies. Experience research, hands-on demos, and fearless acts of collaboration.

Meet us at SIGGRAPH 2018!

Book your place for the most exciting 3D event of the year!

The post KDAB at SIGGRAPH 2018 appeared first on KDAB.

KDAB on Qt: KDAB at Qt Contributor’s Summit 2018, Oslo


KDAB is a major sponsor of this event and a key independent contributor to Qt as our blogs attest.

Every year, dedicated Qt contributors gather at the Qt Contributors’ Summit to share the latest knowledge and best practices with their peers, ensuring that the Qt framework stays at the top of its game. Be a contributor to Qt!

We look forward to meeting you in Oslo!

The post KDAB at Qt Contributor’s Summit 2018, Oslo appeared first on KDAB.

Woboq: Integrating QML and Rust: Creating a QMetaObject at Compile Time


In this blog post, I would like to present a research project I have been working on: Trying to use QML from Rust, and in general, using a C++ library from Rust.

The project is a Rust crate which allows creating a QMetaObject at compile time from pure Rust code. It is available here: https://github.com/woboq/qmetaobject-rs

Qt and Rust

There were already numerous existing projects attempting to integrate Qt and Rust. A great GUI toolkit should work with a great language.

As far back as 2014, the project cxx2rust tried to generate automatic bindings to C++, and in particular to Qt5. The blog post explains all the problems. Another project that automatically generates C++ bindings for Qt is cpp_to_rust. I would not pursue this way of automatically creating bindings, because it cannot produce a binding that can be used from idiomatic Rust code without using unsafe.

There is also qmlrs. The idea here is to manually develop a small wrapper C++ library that exposes extern "C" functions. Then a Rust crate with a good and safe API can internally call these wrappers.
Similarly, the project qml-rust does approximately the same, but uses the DOtherSide bindings as the Qt wrapper library. The same library is used for the D and Nim bindings for QML.
These two projects only concentrate on QML, not on Qt Widgets or the whole of Qt. Since the API is then much smaller, this greatly simplifies the tedious work of creating the bindings manually. Both of these projects generate a QMetaObject at runtime from information given by Rust macros. Also, you cannot use arbitrary types as parameters for your properties or method arguments; you are limited to converting to built-in types.

Finally, there is Jos van den Oever's Rust Qt Binding Generator. To use this project, one has to write a JSON description of the interface one wants to expose; the generator then produces the Rust and C++ glue code so that you can easily call Rust from your Qt C++/QML application.
What I think is a problem is that you are still expected to write some C++ and add an additional step to your build system. That is perfectly fine if you want to add Rust to an existing C++ project, but not if you just want a GUI for a Rust application. Also, writing this JSON description is a bit alien.

I started the qmetaobject crate mainly because I wanted to create the QMetaObject at Rust compile time. The QMetaObject is a data structure which contains all the information about a class deriving from QObject (or Q_GADGET) so the Qt runtime can connect signals with slots, or read and write properties. Normally, the QMetaObject is built at compile time from a C++ file generated by moc, Qt's meta-object compiler.
I'm a fan of creating QMetaObjects: I contribute to Qt, and I also wrote moc-ng and Verdigris, which are all about creating QMetaObjects. Verdigris uses the power of C++ constexpr to create the QMetaObject at compile time, and I wanted to try using Rust to see if it could also be done at compile time.

The qmetaobject crate

The crate uses a custom derive macro to generate the QMetaObject. Custom derive works by adding an annotation in front of a Rust struct, such as #[derive(QObject)] or #[derive(QGadget)]. Upon seeing this annotation, the rustc compiler will call the function from the qmetaobject_impl crate which implements the custom derive. The function has the signature fn(input: TokenStream) -> TokenStream. It is called at compile time; it takes as input the source code of the struct it derives and generates more source code that will then be compiled.
What we do in this custom derive macro is first parse the content of the struct and look for annotations. I've used a set of macros such as qt_property!, qt_method! and so on, similar to Qt's C++ macros. I could also have used custom attributes, but I chose macros as it seemed more natural coming from the Qt world (but perhaps this should be revised).

Let's simply go over a dummy example of using the crate.

extern crate qmetaobject;
use qmetaobject::*; // For simplicity

// Deriving from QObject will automatically implement the QObject trait and
// generates QMetaObject through the custom derive macro.
// This is equivalent to add the Q_OBJECT in Qt code.
#[derive(QObject,Default)]
struct Greeter {
  // We need to specify a C++ base class. This is done by specifying a
  // QObject-like trait. Here we can specify other QObject-like traits such
  // as QAbstractListModel or QQmlExtensionPlugin.
  // The 'base' field is in fact a pointer to the C++ QObject.
  base : qt_base_class!(trait QObject),
  // We declare the 'name' property using the qt_property! macro.
  name : qt_property!(QString; NOTIFY name_changed),
  // We declare a signal. The custom derive will automatically create
  // a function of the same name that can be called to emit it.
  name_changed : qt_signal!(),
  // We can also declare invokable methods.
  compute_greetings : qt_method!(fn compute_greetings(&self, verb : String) -> QString {
      return (verb + " " + &self.name.to_string()).into()
  })
}

fn main() {
  // We then use qml_register_type as an equivalent to qmlRegisterType
  qml_register_type::<Greeter>(cstr!("Greeter"), 1, 0, cstr!("Greeter"));
  let mut engine = QmlEngine::new();
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0; import Greeter 1.0;
    Window {
      visible: true;
      // We can instantiate our rust object here.
      Greeter { id: greeter; name: 'World'; }
      // and use it by accessing its property or method.
      Text { text: greeter.compute_greetings('hello'); }
    }"#.into());
  engine.exec();
}

In this example, we used qml_register_type to register the type with QML, but we can also set properties on the global context. Here is an example with a model, which also demonstrates QGadget:

// derive(QGadget) is the equivalent of Q_GADGET.
#[derive(QGadget,Clone,Default)]
struct Point {
  x: qt_property!(i32),
  y: qt_property!(i32),
}

#[derive(QObject, Default)]
struct Model {
  // Here the C++ class will derive from QAbstractListModel
  base: qt_base_class!(trait QAbstractListModel),
  data: Vec<Point>
}

// But we still need to implement the QAbstractListModel manually
impl QAbstractListModel for Model {
  fn row_count(&self) -> i32 {
    self.data.len() as i32
  }
  fn data(&self, index: QModelIndex, role:i32) -> QVariant {
    if role != USER_ROLE { return QVariant::default(); }
    // We use the QGadget::to_qvariant function
    self.data.get(index.row() as usize).map(|x|x.to_qvariant()).unwrap_or_default()
  }
  fn role_names(&self) -> std::collections::HashMap<i32, QByteArray> {
    vec![(USER_ROLE, QByteArray::from("value"))].into_iter().collect()
  }
}

fn main() {
  let mut model = Model { data: vec![ Point{x:1,y:2} , Point{x:3, y:4} ], ..Default::default() };
  let mut engine = QmlEngine::new();
  // Registers _model as a context property.
  engine.set_object_property("_model".into(), &mut model);
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0;
    Window {
      visible: true;
      ListView {
        anchors.fill: parent;
        model: _model;  // We reference our Model object
        // And we can access the property or method of our gadget
        delegate: Text{ text: value.x + ','+value.y; } }
    }"#.into());
  engine.exec();
}

Other implemented features include the creation of Qt plugins such as QQmlExtensionPlugin without writing a line of C++, only using rust and cargo. (See the qmlextensionplugins example.)

QMetaObject generation

The QMetaObject consists of a set of tables in the data section of the binary: a table of strings and a table of integers. There is also a function pointer to code used to read/write the properties or call the methods.

The custom derive macro will generate the tables as &'static [u8]. The moc-generated code contains QByteArrayData, built in C++, but since we don't want to use a C++ compiler to generate the QMetaObject, we have to lay out all the bytes of the QByteArrayData one by one. Another tricky part is the creation of the Qt binary JSON for the plugin metadata. The Qt binary JSON is also an undocumented data structure which needs to be built byte by byte, respecting many invariants such as alignment and order of the fields.

The code for the static_metacall is just an extern "C" fn. Then we can assemble all these pointers into a QMetaObject. We cannot create a const static structure containing pointers, so this is implemented using the lazy_static! macro.

QObject Creation

Qt needs a QObject* pointer for our object. It has virtual methods to get the QMetaObject. The same applies to QAbstractListModel or any other class we would like to inherit from, which has many virtual methods that we wish to override.

We will then have to materialize an actual C++ object on the heap. This C++ counterpart is created by some of the C++ glue code. We will store a pointer to this C++ counterpart in the field annotated with the qt_base_class! macro. The glue code will instantiate a RustObject<Greeter>. It is a class that inherits from QObject (or any other QObject derivative) and overrides the virtual functions to forward them to a callback in Rust, which will then be able to call the right function on the Rust object.

One of the big problems is that in Rust, contrary to C++, objects can be moved in memory at will. This is a problem because the C++ object contains a pointer to the Rust object, so the Rust object somehow needs to be fixed in memory. This can be achieved by putting it into a Box or an Rc, but even then, it is still possible to move the object in safe code. This problem is not entirely fixed, but the interface takes the object by value and moves it to an immutable location. Then the object can still be accessed safely from a QJSValue object.

Note that QGadget does not need a C++ counterpart.

C++ Glue code

For this project I need a bit of C++ glue code to create the C++ counterpart of my object, and to access the C++ API of Qt types or the QML API. I am using the cpp! macro from the cpp crate. This macro allows embedding C++ code directly into Rust code with very little boilerplate compared to manually creating callbacks and declaring extern "C" functions.
I even contributed a cpp_class macro which allows wrapping C++ classes from Rust.

Should an API be missing, it is easy to add the missing wrapper function. Also, when we want to inherit from a class, we just need to imitate what is done for QAbstractListModel: override all the virtual functions we want to override and forward them to the functions from the trait.

Final Words

My main goal with this crate was to see if we can integrate QML with idiomatic and safe Rust code, without requiring the developer to use C++ or any other alien tool. I also had performance in mind and wanted to create the QMetaObject at compile time and limit the amount of conversions and heap allocations.
Although there are still some problems to solve and the exposed API is far from complete, this is already a beginning.

You can get the qmetaobject crate at this URL: https://github.com/woboq/qmetaobject-rs


The Qt Company Blog: Qt Creator 4.6.2 released


We are happy to announce the release of Qt Creator 4.6.2!

This fixes reparsing of QMake projects, for example when project files change, and a couple of other issues.
Have a look at our change log for more details.

Get Qt Creator 4.6.2

The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.6.2 is also available through an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.6.2 released appeared first on Qt Blog.

The Qt Company Blog: Qt 5.9.6 Released


I am pleased to announce that Qt 5.9.6 is released today. As a patch release Qt 5.9.6 does not add any new functionality, but provides important bug fixes and other improvements.

With Qt 5.9.6 we are also adding binary installers for QNX 7. Qt 5.9 has supported QNX 7 from the very beginning, but since we have only offered binaries for QNX 6.6, there has been some confusion about whether QNX 7 is supported or not. Now there are binaries for both QNX 7 and QNX 6.6, both of which are fully supported with Qt 5.9. For Qt 5.9.6 the QNX binaries are available as offline installers for those holding a valid commercial license.

Compared to Qt 5.9.5, the new Qt 5.9.6 contains 33 bug fixes. In total there are around 195 changes in Qt 5.9.6 compared to Qt 5.9.5. For details of the most important changes, please check the Change files of Qt 5.9.6.

Qt 5.9 LTS entered the ‘Strict’ phase in the beginning of February 2018, and while Qt 5.9.5 contained some fixes done before entering the Strict phase, Qt 5.9.6 is the first release to receive only changes done during the Strict phase. Going forward, Qt 5.9 continues to receive important bug fixes and significant performance fixes during the ‘Strict’ phase. The intention is to reduce the risk of regressions and behavior changes by restricting the changes to the most important ones. We also continue to create new Qt 5.9.x patch releases, but with a slower cadence than earlier.

You can update to Qt 5.9.6 using the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal or from the qt.io Download page. Offline packages are available for commercial users in the Qt Account portal and at the qt.io Download page for open-source users. You can also try out the commercial evaluation option from the qt.io Download page.

The post Qt 5.9.6 Released appeared first on Qt Blog.

Sune Vuorela: Kirigaming – Kolorfill


Last time, I was doing a recipe manager. This time I've been doing a game with JavaScript and QtQuick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

Kolorfill

The end goal is to make the board a single color in as few steps as possible. The way to do it is a “paint bucket” tool that fills from the top left corner with various colors.

But enough talk. Let’s see some code:
https://cgit.kde.org/scratch/sune/kolorfill.git/
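If you just want the gist of the paint-bucket mechanic without reading the repository, here is a hedged sketch of such a fill in C++ for illustration (the game itself implements this in QML/JavaScript, so this is not its actual code):

// Sketch only: flood-fill the region connected to (0,0) with a new color.
#include <QVector>
#include <QPoint>
#include <QQueue>

void floodFill(QVector<QVector<int>> &board, int newColor)
{
    const int rows = board.size();
    if (rows == 0 || board[0].isEmpty())
        return;
    const int cols = board[0].size();
    const int oldColor = board[0][0];
    if (oldColor == newColor)
        return;

    QQueue<QPoint> queue;
    queue.enqueue(QPoint(0, 0));
    board[0][0] = newColor;

    while (!queue.isEmpty()) {
        const QPoint p = queue.dequeue();
        const QPoint neighbours[4] = { {p.x() + 1, p.y()}, {p.x() - 1, p.y()},
                                       {p.x(), p.y() + 1}, {p.x(), p.y() - 1} };
        for (const QPoint &n : neighbours) {
            // Only spread into cells that still have the original color.
            if (n.x() >= 0 && n.x() < rows && n.y() >= 0 && n.y() < cols
                    && board[n.x()][n.y()] == oldColor) {
                board[n.x()][n.y()] = newColor;
                queue.enqueue(n);
            }
        }
    }
}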

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome. Or pointers to QML components that can help me with that.

V-Play Engine: Release 2.17.0: Firebase Cloud Storage, Downloadable Resources at Runtime and Native File Access on All Platforms


V-Play 2.17.0 introduces Firebase Cloud Storage to store local files in the cloud easily and fast. You can use it to create photo or document sharing apps without any server-side code, or maybe even the next Facebook. It also adds downloadable resources at runtime to reduce the initial package size, or to load specific resources only on demand. FileUtils gives you convenient access to the native device file system. You can check out two new app demos for using C++ with QML, use new Qt modules with V-Play Live and much more. This is a big update, don’t miss it!

New Firebase Storage to Upload Local Files to the Cloud

With the new FirebaseStorage item you can upload files to the Firebase Cloud Storage. It uploads local files to the cloud file system and returns the public download URL. With Firebase, you can create content sharing apps like Facebook or Snapchat without additional server-side code.

For example, you can upload local files such as images taken with the camera or documents created in your app.

Here is a code example that shows how to upload an image taken with the camera. After the image is uploaded, we display it in the app.

[Screenshot: camera picker and Firebase Cloud Storage upload]

NOTE: This example uses a public Firebase storage instance, don’t upload any sensitive data!

import QtQuick 2.0
import VPlayApps 1.0
import VPlayPlugins 1.0

App {
  
  NavigationStack {
    Page {
      title: "Firebase Storage"
      
      FirebaseStorage {
        id: storage
        
        config: FirebaseConfig {
          id: customConfig
          
          projectId: "v-play-live-client-test-db"
          databaseUrl: "https://v-play-live-client-test-db.firebaseio.com"
          storageBucket: "v-play-live-client-test-db.appspot.com"
          
          //platform dependent - get these values from the google-services.json / GoogleService-info.plist
          apiKey: Qt.platform.os === "android" ? "AIzaSyD3Pmw89NHhdG9nGIQWwaOB55FuWjcDSS8" : "AIzaSyCheT6ZNFI4mUwfrPRB098a08dVzlhZNME"
          applicationId: Qt.platform.os === "android" ? "1:40083798422:android:ed7cffdd1548a7fa"  : "1:40083798422:ios:ed7cffdd1548a7fa"
          
        }
      }
      
      AppFlickable {
        anchors.fill: parent
        
        Column {
          width: parent.width
          anchors.margins: dp(12)
          
          AppButton {
            text: "Capture image + upload"
            onClicked: nativeUtils.displayCameraPicker()
          }
          
          AppText {
            id: status
            text: "Idle"
          }
          
          // this will display the image after it's uploaded
          AppImage {
            id: img
            width: parent.width
            fillMode: AppImage.PreserveAspectFit
            autoTransform: true
          }
        }
      }
    }
  }
  
  Connections {
    target: nativeUtils
    onCameraPickerFinished: {
      if(accepted) {
        //picture taken with camera is stored at path - upload to Firebase Storage
        storage.uploadFile(path, "test-image" + Date.now() + ".png", function(progress, finished, success, downloadUrl) {
          if(!finished) {
            status.text = "Uploading... " + progress.toFixed(2) + "%"
          } else if(success) {
            img.source = downloadUrl
            status.text = "Upload completed."
          } else {
            status.text = "Upload failed."
          }
        })
      }
    }
  }
}

Download Resources at Runtime

DownloadableResource allows downloading app and game assets on demand during runtime. You no longer need to include all resources like images or videos in the app binary. This results in smaller downloads from the app stores. Cut down your 500MB training app to e.g. only 30MB, and let the user download your workout videos on demand!

The most popular use cases for downloadable packages are:

  • You want to keep your app store binary as small as possible for the first download, to increase the download numbers of your app or game with a smaller download size.
  • You want to download additional content packages after in-app purchases.
  • Keep your initial app size below the store limits:
    • On Google Play, your initial APK size must be below 100MB; above that you need to use Android expansion files. To avoid that, you can just use DownloadableResource and download the additional files at a later time.
    • On iOS, your initial binary size limit is 150MB for mobile network downloads. If your binary is bigger, the user can only download your app over WiFi. Downloading additional resources later also helps you to avoid this limit.
    • With the V-Play DownloadableResource component, you can create a cross-platform solution for downloadable resources that works on both iOS and Android. It even works on desktop, with a single source code for all platforms! This way, you do not need to deal with Android expansion files and can create a working solution for all platforms instead.

Here is a small example of how you could use it: after 5 seconds, it downloads and extracts a zip archive containing an image to the default location, and then replaces the placeholder image with the downloaded image:

import VPlayApps 1.0
import QtQuick 2.0
import VPlay 2.0

App {
  
  // uncomment this to remove the resources on startup, so you can test the downloading again
  //Component.onCompleted: resource1.remove()
  
  // after 5 seconds, we download the resources
  Timer {
    running: true
    interval: 5000
    onTriggered: {
      resource1.download()
    }
  }
  
  NavigationStack {
    Page {
      title: "Downloadable Resource"
      
      DownloadableResource {
        id: resource1
        
        extractAsPackage: true // true for zip archives
        source: "https://v-play.net/web-assets/girl.zip"
      }
      
      AppImage {
        width: parent.width
        fillMode: AppImage.PreserveAspectFit
        // as long as the resource file is not available, we use a placeholder image
        // (the example placeholder is actually also from a web url, to be usable with the web editor)
        // if the resource is available, we get the extracted file url and set it as new image source
        // on your next app start (or live reload) the resource will be available immediately and not downloaded again
        source: resource1.available ? resource1.getExtractedFileUrl("girl.jpg") : "https://v-play.net/web-assets/balloon.png"
      }
    }
  }
}

You have full information about the download, with properties like status, progress and available. You know exactly when resources are available or when to show a loading indicator.

DownloadableResource can load files from any HTTP(S) web address. You can add a secret to protect downloads and restrict them to your app or game only. You can download single files or entire .zip archives, which are automatically extracted.

Once a resource is downloaded, you can use it like any other asset. On your next app start, the resource will be available right away.

FileUtils Class for Cross-Platform Native File Access

You can use the new FileUtils context property to open, read, copy or delete files and folders on any device.

This example downloads a PDF file and then opens it with the native PDF viewer application, using FileUtils::openFile():

[Screenshot: downloading a PDF and opening it in the native viewer]

import VPlayApps 1.0
import QtQuick 2.0
import VPlay 2.0

App {
  id: app
  // uncomment this to remove the resources on startup, so you can test the downloading again
  //Component.onCompleted: pdfResource.remove()
  NavigationStack {
    Page {
      title: "Download PDF"
      
      Column {
        anchors.centerIn: parent
        
        AppButton {
          text: "Download / Open"
          onClicked: {
            if(pdfResource.available) openPdf()
            else pdfResource.download()
          }
        }
        AppText {
          text: "Status: " + pdfResource.status
        }
      }
    }
  }
  DownloadableResource {
    id: pdfResource
    source: "http://www.orimi.com/pdf-test.pdf"
    storageLocation: FileUtils.DocumentsLocation
    storageName: "pdf-test.pdf"
    extractAsPackage: false
    // if the download is completed, available will be set to true
    onAvailableChanged: if(available) openPdf()
  }
  function openPdf() {
    // you can also open files with nativeUtils.openUrl() now (for paths starting with "file://")
    nativeUtils.openUrl(pdfResource.storagePath)
    // with V-Play 2.17.0 you can also use fileUtils.openFile(), however this is not yet supported by the mobile live scripting apps
    //fileUtils.openFile(pdfResource.storagePath)
  }
}

Two New App Examples How to Integrate C++ with QML

You can check out and copy parts from two brand-new app demos that show how to integrate C++ with QML!

Exposing a C++ Class to QML

The first example shows the different forms of C++ and QML integrations. This example is the tutorial result from How to Expose a Qt C++ Class with Signals and Slots to QML.

Path to the app demo: /Examples/V-Play/appdemos/cpp-qml-integration
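If you cannot open the demo right away, the general pattern looks roughly like the following minimal sketch; this is an illustrative class, not the demo’s actual code:

// Hypothetical minimal example of a C++ class exposed to QML.
#include <QObject>

class Counter : public QObject
{
    Q_OBJECT
    Q_PROPERTY(int value READ value WRITE setValue NOTIFY valueChanged)

public:
    explicit Counter(QObject *parent = nullptr) : QObject(parent) {}

    int value() const { return m_value; }
    void setValue(int value)
    {
        if (value == m_value)
            return;
        m_value = value;
        emit valueChanged(m_value);
    }

public slots:
    void increment() { setValue(m_value + 1); }   // callable from QML

signals:
    void valueChanged(int value);

private:
    int m_value = 0;
};

// Registration, e.g. in main():
//   qmlRegisterType<Counter>("com.example", 1, 0, "Counter");

In QML, such a Counter can then be instantiated, its value property can be bound to, and increment() can be called like on any built-in type.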

Display Data from C++ Models with Qt Charts

The second example shows how to combine a C++ backend, which provides the model data, with a frontend created in QML. The data is displayed in QML with Qt Charts, for both 2D and 3D charts. It also includes shader effects, because, why not?

[Screenshot: C++ backend charts example]

Path to the app demo: /Examples/V-Play/appdemos/cpp-backend-charts-qml
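The underlying pattern is, again, a C++ type exposed to QML. A hypothetical, simplified data source (the class and method names are assumptions, not taken from the demo) might look like this:

// Hypothetical sketch: a C++ backend exposing chart data to QML.
#include <QObject>
#include <QPointF>
#include <QVariantList>
#include <QtMath>

class DataSource : public QObject
{
    Q_OBJECT
public:
    explicit DataSource(QObject *parent = nullptr) : QObject(parent) {}

    // Callable from QML, e.g. to fill a chart series.
    Q_INVOKABLE QVariantList generateData(int count) const
    {
        QVariantList points;
        for (int i = 0; i < count; ++i)
            points.append(QVariant::fromValue(QPointF(i, qSin(i / 10.0))));
        return points;
    }
};

// Registration, e.g. in main():
//   engine.rootContext()->setContextProperty("dataSource", new DataSource(&engine));

A QML chart series could then be filled from dataSource.generateData(), while the demo’s 2D/3D charts build on the same idea of QML reading data provided by C++.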

Live Client Support for Bluetooth, NFC and Pointer Handlers

The V-Play Live Client now supports the Qt modules for Bluetooth, NFC and Pointer Handlers.

Network Adapter Selection in Live Server

You can now change the network adapter used by the Live Server. This fixes a possible issue where the mobile Live Client stalls on the “Connected – Loading Project” screen. If you face this issue, here is how to fix it.

Open the settings screen from the lower left corner of your Live Server:

[Screenshot: Live Server settings icon]

Now you can change the selected network adapter:

[Screenshot: Live Server network adapter settings]

This IP is sent to the Live Client to establish the connection. You can try selecting different adapters; the IPs of the Live Server and the Live Client should be in the same network.

More Features, Improvements and Fixes

How to Update V-Play

Test out these new features by following these steps:

  • Open the V-Play SDK Maintenance Tool in your V-Play SDK directory.
  • Choose “Update components” and finish the update process to get this release as described in the V-Play Update Guide.

[Screenshot: V-Play update in the Maintenance Tool]

If you haven’t installed V-Play yet, you can do so now with the latest installer from here. Now you can explore all of the new features included in this release!

For a full list of improvements and fixes to V-Play in this update, please check out the change log!

 

 

 

More Posts Like This

  • How to Make Cross-Platform Mobile Apps with Qt – V-Play Apps
  • Release 2.16.1: Live Code Reloading with Custom C++ and Native Code for Qt
  • Release 2.16.0: iPhone X Support and Runtime Screen Orientation Changes
  • Release 2.15.1: New Firebase Features and New Live Code Reloading Apps | Upgrade to Qt 5.10.1 & Qt Creator 4.5.1

The post Release 2.17.0: Firebase Cloud Storage, Downloadable Resources at Runtime and Native File Access on All Platforms appeared first on V-Play Engine.

The Qt Company Blog: Remote UIs with WebGL and WebAssembly


A frequently requested feature by Qt customers is the possibility to access, view and use a Qt-made UI remotely.

However, in contrast to web applications, Qt applications do not offer remote access by nature, as communication with the backend usually happens via direct function calls and not over socket-based protocols like HTTP or WebSockets.

But the good thing is, with the right system architecture, strong decoupling of frontend and backend, and the functionality of the Qt framework, it is possible to achieve that!

If you want the embedded performance of a Qt application together with zero-installation remote access for your solution, you might consider the following advice and technologies.

Remote access via WebGL Streaming or VNC

If you have a headless device or an embedded device with a simple QML-made UI that only needs to be accessed remotely by a small number of users via a web browser, WebGL streaming is the right thing for you. In WebGL streaming, the GL commands to render the UI are serialized and sent from the web server to the web browser. The web browser will interpret and render the commands. Here’s how Bosch did it:

On a headless device, you can simply start your application with these command line arguments: -platform webgl.

This enables the WebGL streaming platform plugin: the application runs locally on the device, while the WebGL data is made accessible to remote browsers.

For widget-based applications, you might consider the VNC-based functionality, but this requires more bandwidth, since the UI is rendered into pixel buffers which are sent over the network.

The drawback of both platform plugin approaches, VNC and WebGL, is that within one process you can start your application either only remotely or only locally.

If your device has a touchscreen and you still want to have remote access, you need to run at least two processes: one for the local and one for the remote UI.

The data between both processes is shared via the Qt Remote Objects library.
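A hedged sketch of what this sharing could look like with Qt Remote Objects (the object and node names are illustrative, and a real setup would typically use a .rep-generated source/replica pair):

// Process 1 (e.g. the local UI) exposes a QObject holding the shared state.
QRemoteObjectHost host(QUrl(QStringLiteral("local:uiData")));
SharedUiState state;                 // illustrative QObject with the shared properties
host.enableRemoting(&state, QStringLiteral("SharedUiState"));

// Process 2 (e.g. the WebGL-streamed UI) acquires a replica of it.
QRemoteObjectNode node;
node.connectToNode(QUrl(QStringLiteral("local:uiData")));
QScopedPointer<QRemoteObjectDynamicReplica> replica(
    node.acquireDynamic(QStringLiteral("SharedUiState")));
// Once initialized, the replica mirrors the source's properties and signals.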

Example use cases are remote training or remote maintenance for technicians in an industrial scenario, where you can remotely show and control the mouse pointer on an embedded HMI device from the browser.

Have a look at the previous blog posts:

http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/

http://blog.qt.io/blog/2017/07/07/qt-webgl-streaming-merged/

http://blog.qt.io/blog/2017/11/14/qt-webgl-cinematic-experience/

 

WebAssembly

WebGL streaming and VNC are rather suited for a limited number of users accessing the UI at the same time.

For example, you might have an application that needs to be accessed by a large number of users simultaneously and should not require installation. This could be the case when the application runs as Software-as-a-Service (SaaS) in the cloud. Fortunately, there is another technology that might suit your needs: Qt for WebAssembly.

While WebAssembly itself is not a Remote UI technology like WebGL or VNC, it is an open bytecode format for the web and is standardized by W3C.

Here’s a short video of our sensortag demo running on WebAssembly:

With Qt for WebAssembly, we are able to cross-compile Qt applications into WebAssembly bytecode. The generated WASM files can be served from any web server and run in any modern web browser. For remote access and distributed applications, a separate data channel needs to be opened to the device. It needs to be considered here that a WebAssembly application runs in a sandbox; thus, the only way to communicate with the web server is via HTTP requests or WebSockets.

However, this means that in terms of web server communication, Qt applications then behave exactly like web applications!
Of course, you could still compile and deploy the application directly into another platform-specific format with Qt and use Qt Remote Objects for client-server communication. But only with Qt for WebAssembly do the zero-installation feature and sandboxing come for free 😉
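As a small sketch of such a data channel (the URL is an assumption, and the same Qt WebSockets code is expected to behave the same in a Qt for WebAssembly build, where it maps to the browser's WebSocket API):

// Sketch: talking to the backend over a WebSocket from a sandboxed client.
QWebSocket socket;
QObject::connect(&socket, &QWebSocket::connected, [&socket]() {
    socket.sendTextMessage(QStringLiteral("hello from the client"));
});
QObject::connect(&socket, &QWebSocket::textMessageReceived, [](const QString &message) {
    qDebug() << "Server said:" << message;
});
socket.open(QUrl(QStringLiteral("wss://example.com/data")));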

Exemplary use case scenarios are SaaS applications deployed in the cloud and running in the browser, multi-terminal UIs, UIs for gateways or headless devices without installation.

Have a look at our previous blog posts on Qt for WebAssembly:
https://blog.qt.io/blog/2018/04/23/beta-qt-webassembly-technology-preview/
https://blog.qt.io/blog/2018/05/22/qt-for-webassembly/

That’s it for today! Please check back soon for our final installment of our automation mini-blog-series, where we will look at cloud integration. In the meantime, have a look at our website for more information, or at Lars’ blog post for an overview of our blog series on Qt for Automation 2.0.

The post Remote UIs with WebGL and WebAssembly appeared first on Qt Blog.
