Alternative MQTT brokers#






  • Tests: Bulk upload via CSV.

  • Tests: Time filtering and more for HTTP API export.

  • New: Bulk upload via InfluxDB line protocol.

  • Add support for Homie 2.x:

    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$homie 2.0.1
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$name hive-teststand
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$mac 84:F3:EB:B2:FB:1A
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$nodes temperature0,temperature1,temperature2,temperature3,weight,battery,data
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$stats/interval 0
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$stats/signal 62
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$stats/uptime 66
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$fw/name node-wifi-mqtt-homie
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$fw/version 0.10.0
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$fw/checksum 98816d2088bead73b35c38a4db28acb4
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$implementation esp8266
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$implementation/config {"wifi":{"ssid":"Ponyhof"},"mqtt":{"host":"","port":1883,"base_topic":"hiveeyes/testdrive/area-42/node-1/message-json","auth":false},"name":"hive-teststand","ota":{"enabled":true},"device_id":"hive-teststand","settings":{"sendInterval":60,"weightOffset":33840,"kilogramDivider":20.85,"vccAdjust":0,"tempsensorsAmount":4}}
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$implementation/version 2.0.0
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$implementation/ota/enabled true
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature0/$type temperature
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature0/$properties unit,degrees
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature0/unit C
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature1/$type temperature
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature1/$properties unit,degrees
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature1/unit C
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature2/$type temperature
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature2/$properties unit,degrees
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature3/$type temperature
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/temperature3/$properties unit,degrees
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/weight/$type weight
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/weight/$properties unit,kilogram
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/battery/$type battery
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/battery/$properties unit,volt
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/battery/volt 3.02
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/data/$type __json__
    hiveeyes/testdrive/area-42/node-1/message-jsonhive-teststand/$online true
  • Support batch readings (jsonl?)

  • Investigate switching to Klein
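Two of the items above concern bulk ingestion formats. A minimal Python sketch (helper names are made up; this is not Kotori's actual ingest API) which renders readings as InfluxDB line protocol and parses a JSON-lines batch:

```python
import json

def to_line_protocol(measurement, readings):
    """Render readings as InfluxDB line protocol, one point per line.

    Simplified sketch: no tags, and no escaping of special characters.
    """
    lines = []
    for reading in readings:
        fields = ",".join(f"{k}={v}" for k, v in sorted(reading["fields"].items()))
        lines.append(f"{measurement} {fields} {reading['time']}")
    return "\n".join(lines)

def parse_jsonl(payload):
    """Parse a JSON-lines batch: one JSON object per line."""
    return [json.loads(line) for line in payload.splitlines() if line.strip()]
```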








Using server-sent events (SSE) from Particle Photon and Electron

QuantumBlack Matplotlib styles

Graphing energy usage in Grafana

Energy monitoring

Home Assistant DSMR & Utility Meter

Decoder for Dutch Smart Meter Requirements (DSMR) telegram data


Tasmota Sensors#


Tasmota & Grafana#

Tasmota Timestamps#

More Tasmota. We have not implemented timestamping yet. Looking at the example payloads, I noticed that the time is sent without a timezone, e.g. “2019-06-02T22:13:07”. [1,2,3] discuss how/whether the timezone can (optionally) be appended to the ISO-8601 timestamp.

[1] [2] [3]
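The timestamp handling discussed above can be prototyped in a few lines, assuming the device clock runs in a known timezone (UTC here; the helper name is made up):

```python
from datetime import datetime, timezone

def attach_timezone(naive_iso, tz=timezone.utc):
    """Interpret a naive ISO-8601 timestamp (as sent by Tasmota) in the given timezone."""
    return datetime.fromisoformat(naive_iso).replace(tzinfo=tz).isoformat()
```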

Sonoff Pow#

OpenHab configuration#

Number Dust_Sensor_2_5 "PM 2.5 [%.2f µg/m³]" <door> (Dust) {mqtt="<[mosquitto:tele/dust/SENSOR:state:JSONPATH($.SDS0X1['PM2.5'])]"}
Number Dust_Sensor_10  "PM 10 [%.2f µg/m³]" <door> (Dust) {mqtt="<[mosquitto:tele/dust/SENSOR:state:JSONPATH($.SDS0X1['PM10'])]"}
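The two JSONPATH expressions above extract the particulate readings from Tasmota's telemetry JSON. For reference, the equivalent extraction in Python (the payload values are invented):

```python
import json

# Example Tasmota `tele/dust/SENSOR` payload. The structure is inferred from
# the JSONPATH expressions above; the values are made up.
payload = '{"Time": "2019-06-02T22:13:07", "SDS0X1": {"PM2.5": 11.4, "PM10": 24.3}}'

data = json.loads(payload)
pm25 = data["SDS0X1"]["PM2.5"]   # JSONPATH($.SDS0X1['PM2.5'])
pm10 = data["SDS0X1"]["PM10"]    # JSONPATH($.SDS0X1['PM10'])
```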


gpslogger


Other projects


Embedding Grafana#



  • Build Debian packages for arm64v8

  • Build packages for mipsel/libmusl for targeting OpenWRT?
  • Is it possible to run Kotori inside a Docker container under systemd? See also:

  • Improve kotori-selftest

  • Parse .deb package file name from last line of fpm output:

    {:timestamp=>"2019-03-04T01:08:46.573729+0000", :message=>"Created package", :path=>"./dist/kotori_0.22.2-1_amd64.deb"}
  • Improve documentation about release, build, package and publish.

  • [x] Fix problems after switching to Twisted 18.9.0
    • HTTP handler stopped working

    • Warning at startup:

      UserWarning: You do not have a working installation of the service_identity module: 'cannot import name opentype'. Please install it from <> and make sure all of its dependencies are satisfied. Without the service_identity module, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.

  • [o] Run git checkout ${VERSION} after cloning

  • [o] Use fpm option for defining systemd files

  • [o] Make make package-debian run on top of the most recent version if version is not specified

  • [o] Revert apt install docs to previous variant re. recommends && suggests

  • [o] RUN $pip install kotori[${FEATURES}]==${VERSION} --find-links=./dist --upgrade

    does not use the local sdist, but acquires the egg from PyPI.

  • [o]

    root@elbanco:~# /opt/kotori/bin/kotori --version
    :0: UserWarning: You do not have a working installation of the service_identity module: 'cannot import name opentype'.  Please install it from <> and make sure all of its dependencies are satisfied.  Without the service_identity module, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.
    Kotori version 0.22.5
  • [o] Publish Kotori to apt repository after uploading to “incoming” directory:

    offgrid:~ amo$ ssh
    workbench@pulp:~$ ./aptly_publish_kotori
  • [o] Strip larger dependencies from Kotori site-packages

    • 28M bokeh

    • 8.5M ggplot

    • 61M pandas

    • 8.1M pip

    • 27M statsmodels

    • 17M twisted
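The fpm status line quoted above uses Ruby hash syntax; the `.deb` path can be pulled out with a small regex. A sketch (the function name is made up):

```python
import re

def parse_fpm_package_path(line):
    """Extract the :path value from fpm's "Created package" status line."""
    match = re.search(r':path=>"([^"]+)"', line)
    return match.group(1) if match else None
```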


New Python packages for InfluxDB



hiveeyes/testdrive/sandsjo/data.json {"weight":33.33,"temperature1":42.42,"humidity1":84.84,"battery_level":50}
hiveeyes/testdrive/sandsjo/error.json {
    "timestamp": "2018-10-13T18:51:10+00:00",
    "message": "'NoneType' object has no attribute 'startswith'",
    "type": "<type 'exceptions.AttributeError'>",
    "description": "Error processing MQTT message \"{\"weight\":33.33,\"temperature1\":42.42,\"humidity1\":84.84,\"battery_level\":50}\" from topic \"hiveeyes/testdrive/sandsjo/data.json\"."



  • [o] Wrong error channel:

    hiveeyes/kh/cfb/hive1/measure/airhumidity_outside (null)
    {
        "timestamp": "2018-04-10T12:09:18+00:00",
        "message": "could not convert string to float: ",
        "type": "<type 'exceptions.ValueError'>",
        "description": "Error processing MQTT message \"\" from topic \"hiveeyes/kh/cfb/hive1/measure/airhumidity_outside\"."
    }


  • [o] Hiveeyes/Grafana

    • [o] When provisioning with new per-node Grafana panel, check whether datasource “WETTERDATEN” and/or “sunmoon” already exists. Otherwise, create them on demand?

    • [o] Optimize regex performance for vendor.hiveeyes.application.BeekeeperFields

    • [o] Check Grafana Instant Dashboard for

    • [o] Capability to update the new per-node instant dashboard with fields arriving from discrete telemetry data submissions

  • [o] Warning when building a release: “Dependency Links processing has been deprecated with an accelerated time schedule and will be removed in pip 1.6”

  • [o] After building on oasis: rm dist/*.deb

  • [o] Release fresh package for arm

  • [o] Does the Grafana refresh interval tamer really run and work properly?

  • [o] Improve wording of MQTT error signalling:

    hiveeyes/kh/cfb/hive1/measure/airhumidity_outside/error.json {
        "timestamp": "2018-04-09T05:56:42+00:00",
        "message": "could not convert string to float: ",
        "type": "<type 'exceptions.ValueError'>",
        "description": "Error processing MQTT message \"\" from topic \"hiveeyes/kh/cfb/hive1/measure/airhumidity_outside\"."


  • [o] Add inline implementation for a functionality based on WAMP.

    While being at it, easily make it channel-based by just consuming the per-channel export=>json endpoint as a data feed.

  • [o] Grafana: Use human readable title again after fully upgrading to new Grafana API (i.e. don’t use get-by-slug anymore!)


  • [o] Grafana Annotations says “title” field is deprecated. Investigate and remedy eventual issues.

  • [o] Write Issue @ Grafana re. stable addressing of panels and the 40 character limit on uid’s vs. len(uuid -v4) == 36 already


  • [x] Refactoring of kotori.daq.graphing.grafana

  • [o] Governance model: Map Kotori realms to Grafana organizations - finally?




  • [x] Let “luftdatenpumpe” report about its cache location

  • [o] Reduce default refresh time for Grafana panels.

    But how? What we really want is to keep the instant-on effect for new users but gradually tame the refresh interval down to “Each 5 minutes”. At best, this would be done by checking the most recent modification timestamp against a configured threshold.

  • [o] Don’t pipe Luftdaten through the whole Kotori incl. MQTT bus, but:
    • Write it directly to InfluxDB, but using the appropriate core methods from Kotori.

    • Extract this piece of code into an addon namespace to make it available to both Kotori channels and standalone applications.

    • Maybe use golang?

  • [o] Update Grafana dashboards in vendor/luftdaten/application


  • [x] Put automatically generated dashboards into specific folder “Instant dashboards”


  • Supply logrotate configuration:

    cat /etc/logrotate.d/kotori
    /var/log/kotori/kotori.log {
        su kotori kotori
        rotate 52
    }
  • Re. “systemctl reload kotori”: Prepare Kotori for HUP signals for restarting the logging subsystem
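Several items above show the same failure mode: `could not convert string to float:` raised for an empty MQTT payload. A hedged sketch of a guard which treats empty payloads as missing values instead of raising (the function name is made up, not Kotori's actual code):

```python
def to_float_or_none(payload):
    """Convert an MQTT payload to float, treating empty or bogus payloads as missing."""
    text = payload.strip() if isinstance(payload, str) else payload
    if text in (None, ""):
        return None
    try:
        return float(text)
    except ValueError:
        # Non-numeric payload: report as missing instead of raising.
        return None
```

A channel handler could then skip `None` readings and emit a clearer error message mentioning the empty payload explicitly.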





  • [o] Enable Grafana feature “Panel Options » Graph Tooltip » Shared Crosshair” by default

  • [o] Make Dashboard per-channel, not just per-network

  • [o] Upload very large CSV files as .zip or .gz

  • [o] In overload situations:

    2017-08-09T06:13:13+0200 [            ] ERROR: Error processing MQTT message from topic "hiveeyes/testdrive-hivetool/test/15/data.json": [Failure instance: Traceback: <class 'influxdb.exceptions.InfluxDBServerError'>: {"error":"timeout"}
    2017-08-09 06:18:21,983 [requests.packages.urllib3.connectionpool] WARNING: Connection pool is full, discarding connection: localhost
  • [o] Grafana: Display » Stacking & Null value » Null value: null

  • [o] Update link to Grafana Dashboard:

  • Have a look at







Apr 24 15:03:51 elbanco influxd[5910]: [httpd] - admin [24/Apr/2017:15:03:48 +0200] "POST /write?db=luftdaten_testdrive&precision=n HTTP/1.1" 204 0 "-" "python-requests/2.13.0" 74cb40c9-28ee-11e7-8db0-000000000000 3337074
Apr 24 15:03:51 elbanco influxd[5910]: [I] 2017-04-24T13:03:51Z Snapshot for path /var/lib/influxdb/data/_internal/monitor/1703 written in 873.725945ms engine=tsm1
Apr 24 15:03:52 elbanco influxd[5910]: [I] 2017-04-24T13:03:52Z beginning level 1 compaction of group 0, 2 TSM files engine=tsm1
Apr 24 15:03:52 elbanco influxd[5910]: [I] 2017-04-24T13:03:52Z compacting level 1 group (0) /var/lib/influxdb/data/_internal/monitor/1703/000000069-000000001.tsm (#0) engine=tsm1
Apr 24 15:03:52 elbanco influxd[5910]: [I] 2017-04-24T13:03:52Z compacting level 1 group (0) /var/lib/influxdb/data/_internal/monitor/1703/000000070-000000001.tsm (#1) engine=tsm1

Apr 24 15:03:53 elbanco influxd[5910]: [I] 2017-04-24T13:03:53Z compacted level 1 group (0) into /var/lib/influxdb/data/_internal/monitor/1703/000000070-000000002.tsm.tmp (#0) engine=tsm1
Apr 24 15:03:53 elbanco influxd[5910]: [I] 2017-04-24T13:03:53Z compacted level 1 2 files into 1 files in 1.479397792s engine=tsm1



  • [x] Error:

    http: error: OSError: [Errno 63] File name too long: 'hiveeyes_node-wifi-mqtt_esp-esp8266_3bb31b2c-MEASUREMENT_INTERVAL=60 * 1000,SENSOR_HX711=true,HE_SITE=area-42,WIFI_SSID_1=the-beekeepers,LOADCELL_ZERO_OFFSET=53623.0f,DEEPSLEEP_ENABLED=true,LOADCELL_KG_DIVIDER=18053,WIFI_PASS_1=secret,HE_HIVE=node-1,SENSOR_DHTxx=false,HE_USER=testdrive.bin'
  • [o] Store the firmware permanently and offer a marketplace-style portal around it displaying the build details etc.

  • [o] Improve HTTP router: Don’t respond with “HTTP/1.1 405 Method Not Allowed” in case of 404s!

  • [o] Interpolate build host information (OS, Compiler versions, etc.) into artefact information



  • Enable CSV data acquisition over MQTT

  • Multistage HTTP output for firmware builder process







  • When sending a field like “vcc”, the name is not reflected in the panel title appropriately as “vcc @ device=x, site=y”


Annotations - “delete event”:

> use hiveeyes_43a88fd9_9eea_4102_90da_7bac748c742d
> show measurements
> select * from muc_mh_b99_1_events
> delete from muc_mh_b99_1_events where time=1488643668000000000





  • When importing a large CSV file (6MB), parallel imports of other resources are not possible. Why is that? Hint: Thread-pool exhaustion at

  • Problem:

    2017-02-06T02:49:04+0100 [          ] CRITICAL: Could not format chunk or write data (ex=Unknown string format): data={u'Gewicht1': u'Gewicht1', u'Gewichtsabw2': u'Gewichtsabw2', u'Gewicht3': u'Gewicht3', u'Gewichtsabw1': u'Gewichtsabw1', u'Gewicht4': u'Gewicht4', u'Gewichtsabw4': u'Gewichtsabw4', u'Gewichtsabw3': u'Gewichtsabw3', u'Gesamtgewicht': u'Gesamtgewicht', u'Temp3': u'Temp3', u'Temp2': u'Temp2', u'Temp1': u'Temp1', u'Gewicht2': u'Gewicht2', u'Gewichtabw': u'Gewichtabw'}, meta={'node': 'node-001', 'slot': 'data.json', 'realm': 'hiveeyes', 'network': 'testdrive-mh', 'database': 'hiveeyes_testdrive_mh', 'measurement_events': 'muenchen_node_001_events', 'measurement': 'muenchen_node_001_sensors', 'gateway': 'muenchen'}


  • Data export: How to sort fields?

  • How to handle CSV import errors like…?:

    influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'berlin_node_002_sensors  1474570757000000000': invalid field format"}
    influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'berlin_node_002_sensors  1474570815000000000': invalid field format"}
    influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'berlin_node_002_sensors  1474570873000000000': invalid field format"}
    influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'berlin_node_002_sensors  1474570931000000000': invalid field format"}
    influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'berlin_node_002_sensors  1474570989000000000': invalid field format"}

    See also


  • No appropriate stacktrace when using wrong transformer, e.g.:

    transform       = kotori.daq.strategy.foobar:FooBarStrategy.topology_to_storage,



  • Make default panels 400px high and put the legend on the right side

  • Mitigate race condition on Grafana Dashboard creation when importing CSV file

  • Add documentation about annotations

  • New Grafana issue re. empty tags wrt. annotations

  • Sometimes, single MQTT messages are not received/processed. Why?
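The `invalid field format` errors above are what InfluxDB returns when a line-protocol point carries a timestamp but an empty field set; note the double space in `berlin_node_002_sensors  1474570757000000000`. A pre-flight check could reject such lines before submission (a sketch, not Kotori's actual validation):

```python
import re

# A line-protocol point is "measurement[,tags] field-set [timestamp]".
# When the field set is empty, measurement and timestamp end up separated
# by two consecutive spaces, which InfluxDB rejects.
def has_fields(line):
    """Return False for points which carry a timestamp but no fields."""
    return not re.match(r"^\S+\s{2,}\d+$", line)
```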


Watch Grafana issues#

Look at Grafana features#





For measuring fine dust particulates, the Berlin chapter of Freifunk is considering using our infrastructure, see also




  • Generic registration call to explicitly announce field types, e.g. wrt. CSV format:

    ## weight(float), temperature(float), humidity(float)

    • How to do this with JSON payloads over MQTT or similar?

  • Provide recent package (with CSV feature) and supply MongoDB from repository


  • HTTP: Let user announce timezone per channel

  • HTTP: Get current list of header names

  • HTTP: Announce value units

  • Make MongoDB address configurable

  • Virtual tara


  • Add reading to panel on a per-field level

  • Improve acquisition documentation

  • CSV Bulk acquisition, re. timestamp

  • FTP CSV acquisition

  • Geohash

  • CSV: What about quotes (“)?

  • influxdb.exceptions.InfluxDBClientError: 400: write failed: field type conflict: input field "time" on measurement "area_42_node_1" is type string, already exists as type float

  • How to signal errors occurring in the data acquisition chain?

  • Use dateutil.parse immediately on HTTP ingress, so UTC will be republished to MQTT

  • Remember whether InfluxDB database already was created to prevent hammering

  • Hiveeyes / Open Hive CSV:
    • Voltage on a separate panel

  • How to provide the user with access to the log file? e.g. for error messages like influxdb.exceptions.InfluxDBClientError: 400: {"error":"unable to parse 'area_42_node_7  1474570699000000000': invalid field format"}

  • How to run the metrics LoopingCall to actually work when the system is under load?
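The `field type conflict` on the `time` field above happens when a CSV column literally named `time` is written as a string field instead of being used as the point timestamp. A sketch of hoisting it out of the field set before writing (the helper name and the UTC assumption are mine):

```python
from datetime import datetime, timezone

def split_timestamp(fields):
    """Pop a 'time' column from the field dict and parse it as the point timestamp.

    Assumes naive timestamps are in UTC.
    """
    fields = dict(fields)
    raw = fields.pop("time", None)
    timestamp = None
    if raw is not None:
        timestamp = datetime.fromisoformat(raw).replace(tzinfo=timezone.utc)
    return timestamp, fields
```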








  • Grafana dashboard creation: NODE=HUZZAH,GW=DACH,NET=KH. Improve when sending from a different node: node=feather,gw=wormcompost.



  • Improve situation of the Python/MQTT client. Describe how to add timestamp and geohash to provide temporal and spatial information from sensor nodes.

  • First experiments with



  • Introduce ZeroMQ data acquisition




> select * from events
name: events
------------
time                 id                                    tags                text                        title
1470661200000000000  482a38ce-791e-11e6-b152-7cd1c55000be  these are the tags  <a href=>Release notes</a>  Deployed v10.2.0


MongoDB on ARM#


BMBF „Open Photonik“



  • Docs: Add “contact” page

  • Docs: Add “How to configure (and secure) Nginx” or how to bind HTTP port to *:24642.





  • [x] Add export format “.tsv”

  • [o] Improve resiliency when InfluxDB or Grafana is down

  • [o] Disable on production

  • [o] Vendor Hiveeyes: Integrate for Stockkarte



  • [o] Document export parameters “exclude”, “include”, “interpolate” and “sorted”





  • [x] Problem with simplejson after installing

When building for the first time:

B /home/workbench/isarengineering/kotori/build/kotori/lib/python2.7/site-packages/cornice/scaffolds/__init__.pyc
Traceback (most recent call last):
  File "/home/workbench/isarengineering/kotori/build/kotori/bin/virtualenv-tools", line 9, in <module>
    load_entry_point('virtualenv-tools==1.0', 'console_scripts', 'virtualenv-tools')()
  File "/home/workbench/isarengineering/kotori/build/kotori/lib/python2.7/site-packages/", line 258, in main
    if not update_paths(path, options.update_path):
  File "/home/workbench/isarengineering/kotori/build/kotori/lib/python2.7/site-packages/", line 187, in update_paths
    update_pycs(lib_dir, new_path, lib_name)
  File "/home/workbench/isarengineering/kotori/build/kotori/lib/python2.7/site-packages/", line 140, in update_pycs
    update_pyc(filename, local_path)
  File "/home/workbench/isarengineering/kotori/build/kotori/lib/python2.7/site-packages/", line 96, in update_pyc
    code = marshal.load(f)
ValueError: bad marshal data (unknown type code)


  • [x] Fix numpy runtime dependency on atlas, PyTables runtime dependency on HDF5 and more:

    exceptions.ImportError: Missing required dependencies ['numpy']
    ImportError: cannot open shared object file: No such file or directory
    ImportError: HDFStore requires PyTables, " cannot open shared object file: No such file or directory" problem importing

    aptitude install -y libatlas-base-dev libopenblas-base liblapack3 libhdf5-8 libnetcdfc7 liblzo2-2 libbz2-1.0
    aptitude install -y libpng12-0 libfreetype6 python-cairocffi

  • [x] Add .lower() conversion to WanBusStrategy.sanitize_db_identifier

  • [x] Add quotes to series name when querying InfluxDB series starting with numeric value, e.g. 3756782252718325761_1

  • [x] Add “exclude” parameter for mitigating scaling/outlier issue with “wght1”, e.g.

  • [x] Fix exceptions.Exception: Excel worksheet name ‘25a0e5df_9517_405b_ab14_cb5b514ac9e8_3756782252718325761_1’ must be <= 31 chars.

  • [x] Check if build dependencies can be announced to fpm

  • [o] Investigate void rendering with:

  • [o] Build packages for armhf

  • [o] cairo: no [cairocffi or pycairo not found]

  • [o] PyTables: Could not find blosc headers and library; using internal sources.

  • [o] Add logrotate configuration

  • [o] Dependency Links processing has been deprecated with an accelerated time schedule and will be removed in pip 1.6

  • [o] matplotlib has 50MB on its own, can’t we just depend on python-matplotlib? (and python-numpy, and python-pandas?)

  • [o] make -j4 when building package



  • [o] Add Javascript and Arduino clients (using HTTP+JSON)

  • [o] Package building: /COPYRIGHT and /LICENSE get introduced from crossbar
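Two of the items above concern identifier constraints: lowercasing in `WanBusStrategy.sanitize_db_identifier` and Excel's 31-character worksheet name limit. A combined sketch (the exact rules in Kotori may differ):

```python
import re

def sanitize_identifier(name, max_length=None):
    """Lowercase, replace non-alphanumeric runs with underscores, optionally truncate."""
    sanitized = re.sub(r"[^a-z0-9_]+", "_", name.lower()).strip("_")
    if max_length is not None:
        # e.g. max_length=31 for Excel worksheet names.
        sanitized = sanitized[:max_length]
    return sanitized
```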




  • Protocol adapters

    • http-to-mqtt

    • mqtt-to-wamp

    • udp-to-mqtt

  • Data querying and export

    • http-to-influxdb

  • Clients (library and examples: demo, sawtooth)

    • Bash

    • Python

    • PHP

  • Uniform example program interfaces:

    kotori-client.(sh|py|php) transmit demo
    kotori-client.(sh|py|php) transmit sawtooth
    kotori-client.(sh|py|php) fetch demo
  • Documentation!

  • Clients (universal CLI) => Terkin?

; Todo: Add predicate for verifying the payload actually is in JSON format, e.g.::
;
;   source_predicate = body.format:json
;
; Or by directly defining Python code as validator, e.g.::
;
;   source_predicate = json.loads(body)
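The `source_predicate = json.loads(body)` idea from the comment above could be prototyped as a plain Python predicate (no such setting exists in Kotori yet):

```python
import json

def is_json(body):
    """Predicate: does the payload parse as JSON?"""
    try:
        json.loads(body)
        return True
    except (ValueError, TypeError):
        return False
```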


  • [o] Recursively read configuration files

  • [o] Generalize metrics subsystem and add to different applications


  • [o] Notify jpmens about handbook/acquisition/sawtooth.html#multiple-sawtooth-signals, also write something about “jo” at

  • [o] Connect from an ESP8266:

  • [o] Also have a look at:

  • [o] Build and document Kotori CoAP interface based on txThings

  • [o] Q: Uncaught SecurityError: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.

  • [o] Disable Grafana refreshing for static graphs w/o live data on pages like Single sawtooth signal.


  • [o] Add proper content attributions to media elements from 3rd-party authors

  • [o] Display license in documentation

  • [o] By default (mqttkit), prefix dashboard names with realm, to avoid collisions like with hiveeyes.

  • [o] Default zoom level for new Grafana dashboards should be “Last 5 minutes” or even shorter


  • [o] Hiveeyes needs a different activity indicator in the log file due to its low transmission rate. Introduce a total packet counter.

  • [o] Slogan: Data acquisition without friction.





  • [o] Add Grafana graphs to applications/hiveeyes.rst




  • [o] Improve MQTTKit documentation

  • [o] ==> /var/log/kotori/kotori.log <==

    2016-05-13T18:50:51+0200 [kotori.daq.graphing.grafana ] WARN: Unable to format event {'log_namespace': 'kotori.daq.graphing.grafana', 'log_level': <LogLevel=warn>, 'log_logger': <Logger 'kotori.daq.graphing.grafana'>, 'log_time': 1463158251.727876, 'log_source': None, 'log_format': u'Client Error 404: {"message":"Dashboard not found"}'}: u'"message"'

  • [o] kwargs={'userdata': {'foo': 'bar'}}

  • [o] Convenience alias for “utc”


  • [o] Issues after installing Kotori-0.5.0 from Debian package

    • tabulate, pyclibrary and sympy still need to be installed into the virtualenv via pip:
      => pip install kotori[daq_binary]
    • Files are missing: /opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/resources/grafana-dashboard.json …
      ERROR: IOError: [Errno 2] No such file or directory: '/opt/kotori/lib/python2.7/site-packages/kotori/frontend/development.ini'
    • [04.05.16 00:23:33] Janosch: Ah, one more thing. Somehow the permissions on /opt/kotori were not set correctly:
      uid 1000 was used although the user has uid 1001.
      Possibly only after I ran dpkg -p kotori and then dpkg -i again ….
      [04.05.16 00:23:54] Janosch: Needs to be verified again.
    • Disable Grafana completely or reduce error logging: In a situation where the credentials do not match,
      this would otherwise (currently) cause an exception storm of Grafana communication failures.
  • [o] InfluxDB 0.12.0 requires Grafana 3:

  • [o] Docs about how to

    • send telemetry data to generic MQTTKit application (mosquitto_pub, Python)

    • store InfluxDB payloads to database

  • [o] Configuration setting to disable Grafana completely

  • [o] Vendor “LST”

    • Add metrics

    • Don’t use the WAMP bus for achieving higher performance (disable optionally)

  • [o] Put files from etc/apps-available to etc/examples?





  • RF69, RF95, RF212





  • [o] Get rid of “CREATE DATABASE” calls for each and every measurement

  • [o] Improve InfluxDB connection resiliency if database is down on initial connect


  • [o] LST:
    • Refactor components
      • UDPReceiver and UdpBusForwarder => UdpBusForwarder and UdpBusPublisher

      • WampApplication => WampBus

    • Automatically publish messages to the MQTT bus using composition of generic components

  • [o] Refactor MqttInfluxGrafanaService and BusInfluxForwarder into

    new generic component and reuse at Hiveeyes/mqttkit and LST

  • [o] Throughput metrics for vendor LST

  • [o] Configuration and packaging for 0.7.0

  • [o] Pyramid should go to kotori.web with frontend on port 4000

  • [o] Introduce boot_frontend as kotori.web.mount(app=app, port=4000),

    where “app” might be “kotori.frontend:file://development.ini:main”

  • [o] Inject kotori settings into options (to be used as global_conf)



  • [o] Introduce kosh, the Kotori Shell

    • kosh show channels

    • kosh show subchannels

    • kosh show services

  • [o] Make interval of periodic rate display configurable:

    2016-04-03T04:47:09+0200 [            ] INFO: [hiveeyes] measurements: 0.00 Hz, transactions: 0.00 tps
    2016-04-03T04:47:09+0200 [            ] INFO: [mqttkit] transactions: 0.00 tps
  • [o] Fix threading bug when having multiple MQTT subscribers:

    2016-04-03T04:50:13+0200 [mqtt                               ] ERROR: Unexpected CONNACK packet received in None
    Also, the TwistedMqttAdapter “mqtt-mqttkit” somehow seems to take over the existing
    MQTT session of TwistedMqttAdapter “mqtt-hiveeyes” and receives all its messages. WTF!
    => Try to migrate to paho-mqtt, in a multithreaded setup on top of client.loop_forever() for convenience.
  • [o] Use TLS for MQTT connections

  • [o] Improve log format: Put Python module namespace at the end of the line

  • [o] Use timestamp from Paho if not supplied via data message

  • [o] Add measurement count to INFO: [hiveeyes ] measurements: 4.96 Hz, transactions: 5.00 tps

  • [o] Start dogfeeding by subscribing to $SYS/#

  • [o] The realm does not get incorporated into the name of the Grafana dashboard:

    2016-04-03T22:26:09+0200 [kotori.daq.graphing.grafana        ] INFO: Provisioning Grafana for database "hiveeyes_3733a169_70d2_450b_b717_6f002a13716b" and series "tug22_999". dashboard=3733a169-70d2-450b-b717-6f002a13716b
    2016-04-03T22:26:09+0200 [kotori.daq.graphing.grafana        ] INFO: Creating datasource "hiveeyes_3733a169_70d2_450b_b717_6f002a13716b"
    2016-04-03T22:26:09+0200 [kotori.daq.graphing.grafana        ] INFO: Getting dashboard "3733a169-70d2-450b-b717-6f002a13716b"
    2016-04-03T22:26:09+0200 [kotori.daq.graphing.grafana        ] INFO: No missing panels to add


maybe go to:



  • [o] Document some performance data:

    • MQTT and InfluxDB

      • Python 2.7

        • measurements: 1000-1300 Hz

        • transactions: 50-70 tps (30-40 tps when debugging)

      • PyPy 5.0

        • measurements: 2000-3000 Hz
        • transactions: 50-70 tps when ramping up, then goes down to 5-15 tps :-(
          Q: What’s the reason?
          A: Probably because we don’t have a thread pool on the storage adapter side yet
          and the number of parallel requests leads to contention on the Twisted side.
  • [o] MqttWampBridge

  • [o] InfluxDB, MQTT- and Grafana connection and operation robustness/resiliency

  • [o] Run “CREATE DATABASE” only once

  • [o] Proper debug level control

  • [o] Use StorageAdapter from vendor “lst” also at “hiveeyes”

  • [o] Use ThreadPool for storage operations

  • [o] Deprecate InfluxDB 0.8 compatibility

  • [o] MQTT broker connection resiliency

  • [o] Start mqttkit, then mention in README.rst at “For developers”

  • [o] Improve application bootstrapping by refactoring into a Twisted plugin

  • [o] REST API

  • [o] Throttle metrics output to one per minute after 90 seconds

  • [o] Assure communication between Kotori and InfluxDB is efficient (UDP, anyone?)

  • [o] Mechanisms for resetting database and dashboard

  • [o] LST: Headerfile upload API and browser drop target

  • [o] GUI: Interactively create data sinks, add decoding- and mapping-rules and …

  • [o] Start dogfeeding by collecting data from Kotori’s builtin metrics subsystem

  • [o] README: Add foreword about contemporary space ship design and afterword about

    testing, feedback, contributions and more use cases

  • [o] Documentation content license: Creative commons share-alike attribution non-commercial

  • [o] Documentation content attributions








  • [o] Use NodeUSB as Lua development platform and sensor network gateway adapter for WiFi devices

  • [o] Integrate with the OpenXC platform using OpenXC for Python

  • [o] Integrate with other cloud IoT platforms as up- or downstream unit

  • [o] Use the Funky v3 as canonical sensor node?


  • [o] “Kotori Box” demo setup on Raspberry Pi 3

  • [o] Improve docs

    • redesign page for ilaundry applications

    • short history sections for all applications

    • content policy / ownership


Milestone 1 - Kotori 0.6.0#

  • [x] Work on the documentation, see yesterday’s commits

  • [x] Prepare the 0.6.0 release in its current state, including the documentation updates (0.5.1 dates from November 26)

  • [o] Release a reasonably clean and usable Debian package

Milestone 2 - Kotori 0.7.0#

  • [x] Regular refactoring

  • [o] MQTT topic

    • [o] Implement the “content type” signalling via pseudo file extensions as planned (Inspired by Nick O’Leary and Jan-Piet Mens; Acked by Clemens and Richard):

      hiveeyes/testdrive/area42/hive3/temperature vs. hiveeyes/testdrive/area42/hive3.json
    • [o] Further discussion and implementation of the “direction” signalling (Inspired by computourist, Pushed by Richard) Proposal: .../node3/{direction}/{sensor}.foo

  • [x] Generalize the BERadioNetworkApplication / HiveeyesApplication vendor architecture

  • [o] Improve the service-in-service infrastructure with native Twisted service containers

  • [o] Enable more flexible use cases similar to the Hiveeyes one: make the first-level MQTT topic segment “hiveeyes/”

    (the “realm”) configurable through the configuration file (requested by Dazz)

  • [o] Introduce software tests
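The “content type” signalling via pseudo file extensions sketched above can be prototyped as a small topic decoder (the dispatch map is illustrative, not Kotori's actual implementation):

```python
import json

# Map pseudo file extensions to payload decoders.
DECODERS = {
    "json": json.loads,
    "txt": lambda payload: payload,
}

def decode_topic(topic, payload):
    """Derive the payload decoder from a pseudo file extension on the topic."""
    path, _, suffix = topic.rpartition(".")
    if path and suffix in DECODERS:
        return path, DECODERS[suffix](payload)
    # No (known) extension: plain value topic like .../hive3/temperature.
    return topic, payload
```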

Hiveeyes Research#

Mit ein paar Dingen müssen wir uns noch stärker beschäftigen:

  • InfluxDB

    • Wie geht man am besten mit InfluxDB-nativen Tags in unserem Kontext um?

    • Bemerkung: Vielleicht war die Trennung auf Datenbank/Tableebene die falsche Strategie bzw. es gibt noch weitere, die orthogonal davon zusätzlich oder alternativ sinnvoll sind.

  • Grafana

    • Wie kann man hier die Tags aus InfluxDB am besten verarbeiten und in den Dashboards praktisch nutzen?

    • Wie funktionieren Annotations mit InfluxDB?

  • Notifications

    • Outlook: Integrate mqttwarn more closely with Kotori (via the API) and operate it as a universal message broker on .


  • [x] When sending:

    mosquitto_pub -h -t hiveeyes/testdrive/999/1/message-json -m '{"temperature": 42.84}'

first and afterwards:

mosquitto_pub -h -t hiveeyes/testdrive/area-42/1/message-json -m '{"temperature": 42.84}'

No new panel gets created:

2016-01-26T00:25:12+0100 [kotori.daq.graphing.grafana      ] INFO: Creating datasource "hiveeyes_testdrive"
2016-01-26T00:25:12+0100 [kotori.daq.graphing.grafana      ] INFO: Getting dashboard "hiveeyes_testdrive"
2016-01-26T00:25:12+0100 [kotori.daq.graphing.grafana      ] INFO: panels_exists_titles: [u'temp @ node=1,gw=999']
2016-01-26T00:25:12+0100 [kotori.daq.graphing.grafana      ] INFO: panels_new_titles:    ['temp']
2016-01-26T00:25:12+0100 [kotori.daq.graphing.grafana      ] INFO: No missing panels to add


  • [o] Grafana Manager: Create dashboard row per gateway in same network

  • [o] MQTT signals on thresholds

  • [o] Add email alerts on thresholds

  • [o] When sending whole bunches of measurements, ignore fields having leading underscores for Grafana panel creation
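
The underscore filtering could be sketched like this, assuming measurements arrive as dictionaries (field names are illustrative):

```python
def panel_fields(measurement):
    """Select the fields eligible for Grafana panel creation, skipping
    internal bookkeeping fields with a leading underscore such as
    '_name_' or '_hex_'."""
    return sorted(name for name in measurement if not name.startswith("_"))
```
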

  • [o] The order of the Grafana panels (temperature, humidity, weight) works in Grafana 2.1.3, but not in Grafana 2.6.0

2016-01-27 A#

  • [x] systemd init script

  • [o] Send measurements by HTTP POST and UDP, republish to MQTT

  • [o] Mechanism / button to reset the “testdrive” database (or any other?).

    This is required when changing scalar types (e.g. str -> float64, etc.)

2016-01-27 B#

  • [o] Numbers and gauges about message throughput

  • [o] systemd init script for crossbar

2016-01-28 A#

2016-01-28 B#

  • [o] Improve error message if MQTT daemon isn’t listening - currently:

    2016-01-28T21:52:13+0100 [mqtt.client.factory.MQTTFactory  ] INFO: Starting factory <mqtt.client.factory.MQTTFactory instance at 0x7f5105e157a0>
    2016-01-28T21:52:13+0100 [mqtt.client.factory.MQTTFactory  ] INFO: Stopping factory <mqtt.client.factory.MQTTFactory instance at 0x7f5105e157a0>




Prio 1 - Showstoppers#

Besides getting it to work in roughly 30 minutes on first contact (cheers!), some remaining issues make the wash-and-go usage of Kotori inconvenient in day-to-day business. Let’s fix them.

  • Currently nothing on stack.

Prio 1.5 - Important#

  • [o] improve: lst-message sattracker send 0x090100000000000000 --target=udp://localhost:8889

    take --target from configuration, matching channel “sattracker”

  • [o] lst-message sattracker list-structs

  • [o] Field-level granularity for GrafanaManager, to counter the problem that renaming a field by adding a rule is not picked up:

    i.e. if field “hdg” is renamed to “heading”, this won’t get reflected in Grafana automatically

  • [o] Honour annotation attribute “unit” when adding Grafana panels

  • [o] SymPy annotations should be able to declare virtual fields

  • [o] reduce logging

Prio 2#

  • [o] troubleshooting docs

    • sattracker-message decode 0x090200000100000000 (configfile: etc/lst-h2m.ini):

      2015-11-24 21:52:09,325 [kotori.vendor.lst.commands] ERROR : Decoding binary data “0x090200000100000000” to struct failed. Struct with id 2 (0x2) not registered.

    • sattracker-message info struct_position2 (configfile: etc/lst-h2m.ini):

      2015-11-24 21:52:58,642 [kotori.vendor.lst.commands] ERROR : No struct named “struct_position2”

  • [o] new message command h2m|sattracker-message list to show all struct names

  • [o] new “influxdb” maintenance command with e.g. “drop database”

  • [o] pyclibrary upstreaming: patches and ctor issue:

    Traceback (most recent call last):
      File "kotori/daq/intercom/", line 112, in <module>
      File "kotori/daq/intercom/", line 72, in main
        p = clib.struct_program() #(abc=9)
      File "/Users/amo/dev/foss/", line 230, in __getattr__
        obj = self(k, n)
      File "/Users/amo/dev/foss/", line 210, in __call__
        self._objs_[typ][name] = self._make_obj_(typ, name)
      File "/Users/amo/dev/foss/", line 277, in _make_obj_
        return self._get_struct('structs', n)
      File "/Users/amo/dev/foss/", line 294, in _get_struct
        (m[0], self._get_type(m[1]), m[2]) for m in defs]
    ValueError: number of bits invalid for bit field
  • [o] refactor config['_active_'] mechanics in lst/

Prio 3#

  • [o] sanity checks for struct schema e.g. against declared length

  • [o] Topic “measurement tightness” / “sending timestamps”

  • [o] Properly implement checksumming, honor field ck

    sum up all bytes: 0 to n-1 (w/o ck), then mod 255
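
The rule above can be sketched as follows, assuming the checksum byte ck is the final byte of the payload:

```python
def checksum(payload):
    """Compute the 'ck' checksum: sum all bytes except the trailing
    checksum byte itself, then take the result modulo 255."""
    return sum(payload[:-1]) % 255
```
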

  • [o] database export

  • [o] check with pyclibrary development branch:

  • [o] Intro to the H2M scenario with pictures, drawing, source code (header file) and nice Grafana graph

  • [o] Flexible pretending UDP sender programs for generating and sending message struct payloads

  • [o] Waveform publishers

  • [o] Bring xyz-message info|decode|list to the web

  • [o] Bring “Add Project” (c header file) to the web, including compilation error messages

  • [o] refactor classmethods of LibraryAdapter into separate LibraryAdapterFactory

  • [o] cache compilation step

  • [o] add link to Telemetry.cpp

  • [o] ctor syntax

  • [o] make issue @ pyclibrary re. brace-or-equal-initializers:

  • [o] highlevel influxdb client

  • [o] runtime-update of c struct or restart automatism
    • [o] Make brace-or-equal-initializers work properly.

      // brace-initializer
      : length(9), ID(1)

      // equal-initializer
      uint8_t  length = 9         ;//1
      uint8_t  ID     = 1         ;//2

      Unfortunately, pyclibrary croaks on the first variant.

      On the other hand, the Mbed compiler croaks on the second variant or the program fails to initialize the struct properly at runtime. Let’s investigate.

      1. => Make an issue @ upstream re. ctor syntax with small canonical example.

      2. => Investigate why the Mbed compiler doesn’t grok the equal-initializer style.

    • [o] Make infrastructure based on typedefs instead of structs to honor initializer semantics

  • [o] improve error handling (show full stacktrace in log or web frontend), especially when sending payloads to wrong handlers, e.g.:

    2015-11-26T11:30:12+0100 [kotori.daq.intercom.udp          ] INFO: Received via UDP from 0x303b303b32332e37353b35312e3033323b2d302e303136
    2015-11-26T11:30:12+0100 [kotori.daq.intercom.c            ] ERROR: Struct with id 59 (0x3b) not registered.
    2015-11-26T11:30:12+0100 [twisted.internet.defer           ] CRITICAL: Unhandled error in Deferred:

Prio 4#


  • [x] Rename repository to “kotori”

  • [x] Publish docs to

  • [x] Proper commandline interface for encoding and decoding message structs à la beradio

  • [x] Publish docs to

  • [x] The order of fields provisioned into Grafana panel is wrong due to unordered-dict-republishing on Bus
    • Example: “03_cap_w” has “voltage_low, voltage_mid, voltage_load, voltage_max, …”

      but should be “voltage_low, voltage_mid, voltage_max, voltage_load, …”

    • Proposal: Either publish something self-contained to the Bus which reflects the very order,

      or add some bookkeeping (a struct->fieldname registry) at the decoding level, where order is correct. Reuse this information when creating the Grafana stuff.

    • Solution: Send data as list of lists to the WAMP bus.
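
The solution can be sketched like this: serializing the decoded fields as a list of [name, value] pairs keeps their order intact across the bus, whereas a plain dict would not (on the Python 2 interpreter used at the time). Function names are illustrative:

```python
from collections import OrderedDict

def to_bus_payload(fields):
    """Serialize an ordered sequence of (name, value) pairs as a list
    of lists, which survives JSON serialization with order intact."""
    return [[name, value] for name, value in fields]

def from_bus_payload(payload):
    """Restore an ordered mapping on the receiving side."""
    return OrderedDict((name, value) for name, value in payload)
```
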

  • [x] kotori.daq.intercom.c should perform the compilation step for getting a out of a msglib.h

  • [x] decouple main application from self.config[‘lst-h2m’]

  • [x] unsanitized log output exception:

    2015-11-20T16:56:57+0100 [        ] INFO: Storage location:  {'series': '01_position', 'database': u'edu_hm_lst_sattracker'}
    2015-11-20T16:56:57+0100 [        ] ERROR: InfluxDBClientError: 401: {"error":"user not found"}
    2015-11-20T16:56:57+0100 [        ] ERROR: Unable to format event {'log_namespace': '', 'log_level': <LogLevel=error>, 'log_logger': <Logger ''>, 'log_time': 1448035017.722721, 'log_source': None, 'log_format': 'Processing Bus message failed: 401: {"error":"user not found"}\nERROR: InfluxDBClientError: 401: {{"error":"user not found"}}\n\n============================================================\nEntry point:\nFilename:    /home/basti/kotori/kotori/daq/storage/\nLine number: 171\nFunction:    bus_receive\nCode:        return self.process_message(self.topic, payload)\n============================================================\nSource of exception:\nFilename:    /home/basti/kotori/.venv27/local/lib/python2.7/site-packages/influxdb-2.9.2-py2.7.egg/influxdb/\nLine number: 247\nFunction:    request\nCode:        raise InfluxDBClientError(response.content, response.status_code)\n\nTraceback (most recent call last):\n  File "/home/basti/kotori/kotori/daq/storage/", line 171, in bus_receive\n    return self.process_message(self.topic, payload)\n  File "/home/basti/kotori/kotori/daq/storage/", line 195, in process_message\n    self.store_mes
  • [x] non-ascii “char” value can’t be published to WAMP Bus

    send message:

    sattracker-message send 0x09010000fe0621019c --target=udp://localhost:8889


    2015-11-20T17:32:29+0100 [kotori.daq.intercom.udp          ] INFO: Received via UDP from '\t\x01\x00\x00@\x06H\x01\xf2'
    2015-11-20T17:32:29+0100 [kotori.daq.intercom.udp          ] INFO: Publishing to topic '' with realm 'lst': [(u'length', 9), (u'ID', 1), (u'flag_1', 0), (u'hdg', 1600), (u'pitch', 328), (u'ck', '\xf2'), ('_name_', u'struct_position'), ('_hex_', '0901000040064801f2')]
    2015-11-20T17:32:29+0100 [twisted.internet.defer           ] CRITICAL: Unhandled error in Deferred:
    Traceback (most recent call last):
      File "/home/basti/kotori/kotori/daq/intercom/", line 32, in datagramReceived
        yield self.bus.publish(self.topic, data_out)
      File "/home/basti/kotori/.venv27/local/lib/python2.7/site-packages/autobahn-0.10.9-py2.7.egg/autobahn/wamp/", line 1034, in publish
        raise e
    autobahn.wamp.exception.SerializationError: WAMP serialization error ('ascii' codec can't decode byte 0xf2 in position 1: ordinal not in range(128))
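
One way to avoid this serialization error is to hex-encode raw byte values before publishing; a sketch under the assumption that decoded fields arrive as (name, value) pairs (the actual fix in Kotori may differ):

```python
def sanitize_for_bus(fields):
    """Hex-encode raw byte values (such as the checksum field 'ck')
    so the payload only contains ASCII-safe types before it is handed
    to the WAMP serializer."""
    sanitized = []
    for name, value in fields:
        if isinstance(value, bytes):
            value = value.hex()
        sanitized.append((name, value))
    return sanitized
```
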
  • [x] Make compiler configurable (/usr/bin/g++ on Linux vs. /opt/local/bin/g++-mp-5 on OSX)

  • [x] Field type conflicts in InfluxDB, e.g. when adding a transformation rule on the same name, thus changing the data type of an existing field:

      2015-11-22T17:00:52+0100 [        ] ERROR: Processing Bus message failed: 400: write failed: field type conflict: input field "pitch" on measurement "01_position" is type float64, already exists as type integer
          ERROR: InfluxDBClientError: 400: write failed: field type conflict: input field "pitch" on measurement "01_position" is type float64, already exists as type integer
    Here, "pitch" initially came in as an integer, but has now changed its type to float64,
    due to applying a transformation rule, which (always) yields floats.
    => Is it possible (and appropriate) to ALTER TABLE on demand?
    => At least add the possibility to drop a database via the web interface.
    - [x] Upgrade to python module "influxdb-2.10.0" => didn't help
    - [x] Store all numerical data as floats
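
The "store all numerical data as floats" workaround can be sketched like this (the field layout is assumed for illustration):

```python
def coerce_floats(measurement):
    """Coerce integer field values to floats before writing to InfluxDB,
    so that a transformation rule yielding floats later on cannot cause
    a 'field type conflict' with an already-existing integer field."""
    return {
        name: float(value) if isinstance(value, int) and not isinstance(value, bool) else value
        for name, value in measurement.items()
    }
```
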
  • [x] C Header parsing convenience

    • [x] Automatically add #include "stdint.h" (required for types uint8_t, etc.) and

      remove #include "mbed.h" (croaks on Intel)

    • [x] Improve transcoding convenience by using annotations like

      // name=heading; expr=hdg * 20; unit=degrees (see “Math expressions”). Use it for renaming fields and scaling values in Kotori, and for assigning units in Grafana. => Implemented based on SymPy; use it for flexible scaling.
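
A sketch of how such an annotation rule might be applied with SymPy; the rule dictionary layout is an assumption for illustration, not Kotori's actual data model:

```python
import sympy

def apply_transformation_rule(fields, rule):
    """Evaluate the rule's 'expr' against the current field values with
    SymPy and store the result under the new field name: for example,
    {'name': 'heading', 'expr': 'hdg * 20'} turns hdg=80 into heading=1600.0."""
    expression = sympy.sympify(rule["expr"])
    substitutions = {sympy.Symbol(name): value for name, value in fields.items()}
    fields[rule["name"]] = float(expression.subs(substitutions))
    return fields
```
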

  • [x] proper error message when decoding unknown message

  • [x] rename lst-h2m.ini to lst.ini

  • [x] generalize h2m-message vs. sattracker-message into lst-message,

    Maybe read the default configuration via ~/.kotori.ini, which transitively points to ./etc/lst.ini, to keep things convenient; otherwise, the ini file must be specified every time. Another variant: export KOTORI_CONFIG=/etc/kotori/lst.ini

  • [x] document how to add a new channel

  • [x] document rule-based Transformations
    • syntax

    • math expressions

    • sattracker-message transform

  • [x] add to docs:


Prio 1#

  • [x] Fix dashboard creation

  • [o] Don’t always do CREATE DATABASE hiveeyes_3733a169_70d2_450b_b717_6f002a13716b

    see: root@elbanco:~# tail -f /var/log/influxdb/influxd.log

  • [o] Receive timestamp from MQTT and use this one
  • [o] Use UDP for sending measurement points to InfluxDB:

    cli = InfluxDBClient.from_DSN('udp+influxdb://username:pass@localhost:8086/databasename', timeout=5, udp_port=159)

Prio 2#

  • [o] Improve inline docs

  • [o] License and open sourcing

  • [o] Enhance the mechanism by which GrafanaManager (re)creates a dashboard after it has been deleted by the user at runtime.

    Currently, dashboards are only created for packets arriving after a Kotori restart, and they are never deleted automatically.


  • [x] Sort “collect_fields” result before passing to grafana manager

  • [x] investigate and improve mqtt connection robustness and recycling:

    - MQTTFactory shuts down after exception when storing via InfluxDB::
              File "/home/kotori/develop/kotori-daq/src/kotori.node/kotori/daq/storage/", line 101, in write_real
                response = self.influx.write_points([self.v08_to_09(chunk)])
              File "/home/kotori/develop/kotori-daq/.venv27/local/lib/python2.7/site-packages/influxdb-2.9.2-py2.7.egg/influxdb/", line 387, in write_points
              File "/home/kotori/develop/kotori-daq/.venv27/local/lib/python2.7/site-packages/influxdb-2.9.2-py2.7.egg/influxdb/", line 432, in _write_points
              File "/home/kotori/develop/kotori-daq/.venv27/local/lib/python2.7/site-packages/influxdb-2.9.2-py2.7.egg/influxdb/", line 277, in write
              File "/home/kotori/develop/kotori-daq/.venv27/local/lib/python2.7/site-packages/influxdb-2.9.2-py2.7.egg/influxdb/", line 247, in request
                raise InfluxDBClientError(response.content, response.status_code)
            influxdb.exceptions.InfluxDBClientError: 400: unable to parse 'w.t ': invalid field format
        2015-10-20 06:12:59+0200 [-] Stopping factory <mqtt.client.factory.MQTTFactory instance at 0x7fda346ccb48>


Prio 1#

  • [x] node registration: send hostname along

  • [o] node_id-to-label translator with server-side persistence at master node

  • [o] run as init.d daemon

Prio 2#

  • [o] show embedded video when node signals activity

  • [o] Bug when speaking umlauts, like “Bolognesääää!”:

    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,] Traceback (most recent call last):
    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,]   File ".venv27/local/lib/python2.7/site-packages/autobahn-0.7.0-py2.7.egg/autobahn/", line 863, in onMessage
    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,]     self.factory.dispatch(topicUri, event, exclude, eligible)
    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,]   File ".venv27/local/lib/python2.7/site-packages/autobahn-0.7.0-py2.7.egg/autobahn/", line 1033, in dispatch
    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,]     log.msg("publish event %s for topicUri %s" % (str(event), topicUri))
    2014-01-13 20:01:24+0100 [MasterServerProtocol,5,] UnicodeEncodeError: 'ascii' codec can't encode characters in position 8-12: ordinal not in range(128)

Prio 3#


Milestone 1#

  • dynamic receiver channels

  • realtime scope views: embed Grafana graphs or render directly, e.g. using Rickshaw.js?

Milestone 2#

  • pdf renderer

  • derivation and integration