
Tuesday, December 29, 2020

Storing ThingsBoard time series data with Apache Cassandra: improved efficiency

Great news for all ThingsBoard users who store time series data in the Cassandra database!

We are happy to announce a long-awaited improvement to the Cassandra time series DAO: it now caches partitions for telemetry keys, which ultimately speeds up saving time series data by up to two times.


Let's take a look at this improvement.


So, what have we done?

Prior to ThingsBoard version 3.2.1, saving a single time series data point caused two insert requests to Cassandra: one for the actual value and one for the partition. Saving partitions is necessary to optimize certain aggregation queries; this is how ThingsBoard “knows” that there is data for a specific device in a particular timeframe.


In other words, while saving, for instance, 25 different data points for a certain entity such as a device, we performed 50 requests to the database: 25 requests to save time series data records (each consisting of UUID, telemetry key, telemetry value, and timestamp) and 25 requests to save partition records (UUID, telemetry key, and a partition timestamp calculated from the timestamp of the corresponding data point).
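For illustration, this is roughly what those two inserts look like in CQL. It is a simplified sketch based on ThingsBoard's default Cassandra schema (the ts_kv_cf and ts_kv_partitions_cf tables); the device UUID, key, and values below are hypothetical:

 -- insert of the actual data point (hypothetical device UUID and value)
 INSERT INTO ts_kv_cf (entity_type, entity_id, key, partition, ts, dbl_v)
 VALUES ('DEVICE', 1e9a7f50-29c4-11eb-adc1-0242ac120002, 'temperature', 1606780800000, 1609243200000, 23.4);

 -- insert of the partition record (same device, key, and month partition)
 INSERT INTO ts_kv_partitions_cf (entity_type, entity_id, key, partition)
 VALUES ('DEVICE', 1e9a7f50-29c4-11eb-adc1-0242ac120002, 'temperature', 1606780800000);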

Assuming all 25 data points belong to the same partition (e.g. the same month if the TS_KV_PARTITIONING parameter is set to MONTHS), we effectively inserted 25 duplicate partition records.

This definitely required an improvement. 


Starting with ThingsBoard 3.2.1, we introduce a partition cache to avoid saving duplicate partition records.


To control this cache, a new parameter, TS_KV_PARTITIONS_MAX_CACHE_SIZE, has been added with a default value of 100000; it defines the maximum number of partition keys that can be cached.


How to configure it?


To maximize the cache hit rate, you should set the TS_KV_PARTITIONS_MAX_CACHE_SIZE parameter to a value that matches your deployment.

The partition cache consumes approximately 300 bytes per cache record (depending on the average size of the data point key). Caching 1 million partitions will therefore require approximately 300 MB of RAM.

Let's look at a specific example to better understand what value to set for this parameter in your case:

Suppose we use TS_KV_PARTITIONING = MONTHS.

Let's assume that we have 100,000 devices of the same type, each of which sends 2 data points to the system for storage every second. That is 200,000 requests per second to save time series data records and, as a result, another 200,000 requests per second to save partition records, meaning that without the cache, previous versions of ThingsBoard performed ~400,000 requests/sec.

With this workload, the default value TS_KV_PARTITIONS_MAX_CACHE_SIZE = 100000 is not enough for all partitions to be cached: there are 200,000 unique device/key combinations, so only half of them fit into the cache, the cache hit rate would be close to 0.5, and cached partitions would constantly be evicted while time series data records are being saved.

By setting TS_KV_PARTITIONS_MAX_CACHE_SIZE = 100000 (devices) * 2 (unique data point keys per device) + 10000 (reserve for saving statistics, etc.), we avoid storing the duplicated partition records.

Therefore, the cache size should be at least (number of unique data point keys per entity * number of entities that send them) + a reserve.
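Putting the numbers from this example together (a worked calculation under the assumptions above):

 Cache size: 100,000 devices * 2 keys + 10,000 reserve = 210,000 records
 Estimated cache memory: 210,000 records * ~300 bytes ≈ 63 MB of RAM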


Wednesday, September 23, 2020

It's time to listen! ESYS use cases with ThingsBoard

ThingsBoard is launching a series of customer experience publications. For your convenience, we will tag the stories with #ThingsBoardInAction. We have already published case studies of our customers, and now we want to make these publications regular.

The very first #ThingsBoardInAction story came from ESYS GmbH. The company has a large number of customers from various industries. Below we present some of the solutions ESYS has implemented with ThingsBoard. Initially, the company started with the Community version and then migrated to the Pro version of the platform.

#bthvn2020

"In the course of the Beethoven Year 2020, the Berlin State Library has exhibited and made available to the public its substantial collection of Ludwig van Beethoven's music manuscripts.

ESYS GmbH ensured the climatic monitoring of the sensitive exhibits and made them accessible to the responsible persons via Thingsboard Dashboard," says Dr. Robby Rochlitzer, Managing Director Research and Development at ESYS.



Agriculture


For the agricultural industry, the ESYS team has developed a product that uses Modbus sensors for soil moisture and climate monitoring (temperature, relative humidity, wind direction, wind speed, rainfall) to optimize irrigation and fertilization, and allows evaluation of the data via ThingsBoard.


Monitoring Industry 4.0


According to Dr. Rochlitzer, "In cooperation with an industrial company, a system was developed that records and monitors motor temperatures, motor current consumption, the temperatures of two motor bearings, vibrations and control cabinet temperatures in kilns".


Smart poultry production

For the purpose of monitoring breeding houses and adjusting the quality-of-life index, the developed system delivered live images and the ability to evaluate them via ThingsBoard.


To all the customers we have and will have in the future: #ThingsBoardInAction is a good opportunity to tell the world about your company and your solutions. We are keen to support your IoT market expansion with our products and this additional option.


Wednesday, September 16, 2020

Deleting timeseries of deleted ThingsBoard entities

    As of ThingsBoard v.2.5.4/3.1.1, telemetry and attributes of entities are not automatically deleted when the entities themselves are deleted, which degrades performance (query speed decreases) and fills up disk space. Below are queries for the PostgreSQL database that delete the orphaned telemetry and attributes.

    Deleting timeseries of previously deleted entities:


 DELETE FROM ts_kv WHERE entity_id NOT IN (SELECT id FROM device) AND entity_id NOT IN (SELECT id FROM asset) AND entity_id NOT IN (SELECT id FROM customer) AND entity_id NOT IN (SELECT id FROM tenant) AND entity_id NOT IN (SELECT id FROM entity_group) AND entity_id NOT IN (SELECT id FROM integration) AND entity_id NOT IN (SELECT id FROM converter) AND entity_id NOT IN (SELECT id FROM entity_view) AND entity_id NOT IN (SELECT id FROM tb_user) AND entity_id NOT IN (SELECT id FROM dashboard) AND entity_id NOT IN (SELECT id FROM rule_chain) AND entity_id NOT IN (SELECT id FROM role); 
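    If you want to estimate how many orphaned rows the query above will remove before actually deleting them, the same filter can be used with a count (a convenience sketch for the ts_kv table; attribute_kv can be checked the same way):

 SELECT count(*) FROM ts_kv WHERE entity_id NOT IN (SELECT id FROM device) AND entity_id NOT IN (SELECT id FROM asset) AND entity_id NOT IN (SELECT id FROM customer) AND entity_id NOT IN (SELECT id FROM tenant) AND entity_id NOT IN (SELECT id FROM entity_group) AND entity_id NOT IN (SELECT id FROM integration) AND entity_id NOT IN (SELECT id FROM converter) AND entity_id NOT IN (SELECT id FROM entity_view) AND entity_id NOT IN (SELECT id FROM tb_user) AND entity_id NOT IN (SELECT id FROM dashboard) AND entity_id NOT IN (SELECT id FROM rule_chain) AND entity_id NOT IN (SELECT id FROM role);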

    Deleting attributes of previously deleted entities:


 DELETE FROM attribute_kv WHERE entity_id NOT IN (SELECT id FROM device) AND entity_id NOT IN (SELECT id FROM asset) AND entity_id NOT IN (SELECT id FROM customer) AND entity_id NOT IN (SELECT id FROM tenant) AND entity_id NOT IN (SELECT id FROM entity_group) AND entity_id NOT IN (SELECT id FROM integration) AND entity_id NOT IN (SELECT id FROM converter) AND entity_id NOT IN (SELECT id FROM entity_view) AND entity_id NOT IN (SELECT id FROM tb_user) AND entity_id NOT IN (SELECT id FROM dashboard) AND entity_id NOT IN (SELECT id FROM rule_chain) AND entity_id NOT IN (SELECT id FROM role); 

    However, in PostgreSQL deleted rows are not physically removed from the table; they remain present until a VACUUM is performed, as described here: https://www.postgresql.org/docs/11/sql-vacuum.html. In order to reclaim the disk space, a VACUUM command with the ANALYZE parameter (to refresh statistics) needs to be executed:


 VACUUM ANALYZE attribute_kv; 

    If reclaiming space is critical, the FULL parameter should be added (please note that it requires as much free disk space as the table currently takes up):


 VACUUM FULL ANALYZE attribute_kv; 
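    To see how much disk space a table occupies before and after the cleanup, you can check its total size; for example (a sketch for the attribute_kv table):

 SELECT pg_size_pretty(pg_total_relation_size('attribute_kv'));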

    In order to find the list of telemetry tables to be VACUUM’ed, the following command needs to be executed:

SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' AND table_name like 'ts_kv%' ORDER BY table_name; 

    Then the resulting list of tables is put into the VACUUM command; in this example it looks like this:

VACUUM FULL ANALYZE ts_kv, ts_kv_2004_01, ts_kv_2020_08, ts_kv_2020_09, ts_kv_dictionary, ts_kv_indefinite, ts_kv_latest; 
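    If the list of ts_kv tables is long, a helper query like the following (a convenience sketch, not part of the original instructions) can assemble the VACUUM command for you:

 SELECT 'VACUUM FULL ANALYZE ' || string_agg(table_name::text, ', ' ORDER BY table_name) || ';' FROM information_schema.tables WHERE table_schema = 'public' AND table_name LIKE 'ts_kv%';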

    Please be aware that:

1. You need to put the list of tables from the output of the previous SELECT query, not just copy the previous VACUUM command.

2. Tables are not available while a VACUUM FULL command is running (all incoming read and write queries are put in a queue), so the best solution is to perform it during a period of (at least relative) downtime.



Tuesday, January 14, 2020

Another way to celebrate 5k!

We know the geeks' network is about business and niche humor, and the public does not usually put hands together for postings of this kind (luckily, this one is not about cats). But...
U R simply the best.
5000 stars on GitHub