[vz-users] Middleware-Abfrage 4-Wochen Intervall

Andreas Götz cpuidle at gmail.com
Mon Jan 25 20:23:34 CET 2016


Argh, flaw in my reasoning. Could it be that you are already aggregating by weeks? If so, please delete the aggregation table for a first test. If that works, it should also work afterwards - it's just that, unfortunately, the wrong data is already stored in there...

Best regards, Andreas 
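[Editor's illustration, not part of the original mail:] The year-boundary split visible in the quoted tuples further down (one bucket ending 31.12., the next already on 02.01.) is exactly what an ISO week key avoids. A minimal Python sketch of the difference - grouping by calendar year plus week number breaks the week at New Year, while an ISO year/week pair (what MySQL's YEARWEEK(ts, 3) returns) keeps it together:

```python
from datetime import date

# Two sample days from the quoted output: 31.12.2015 and 02.01.2016
a, b = date(2015, 12, 31), date(2016, 1, 2)

# Naive key: calendar year + ISO week number -> splits the week at New Year
def naive_key(d):
    return (d.year, d.isocalendar()[1])

# ISO key: ISO year + ISO week, analogous to MySQL's YEARWEEK(ts, 3)
def iso_key(d):
    return tuple(d.isocalendar())[:2]

print(naive_key(a), naive_key(b))  # (2015, 53) (2016, 53) -> two buckets
print(iso_key(a), iso_key(b))      # (2015, 53) (2015, 53) -> one bucket
```

Both days fall in ISO week 2015-W53, so an ISO-aware GROUP BY puts them in a single weekly bucket instead of emitting an extra tuple at the year boundary.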

> On 25.01.2016 at 20:06, application MGR <applicationMGR at ecoCuyo.de> wrote:
> 
> Hello Andreas,
> here is the result with &debug=1:
> {
> 	"version": "0.3",
> 	"data": [
> 		{
> 			"tuples": [
> 				[
> 					1451170785000,
> 					136.755,
> 					39831
> 				],
> 				[
> 					1451602785000,
> 					188.663,
> 					28347
> 				],
> 				[
> 					1451775585000,
> 					42.931,
> 					11520
> 				],
> 				[
> 					1452380385000,
> 					101.307,
> 					39865
> 				],
> 				[
> 					1452985185000,
> 					110.44,
> 					39772
> 				],
> 				[
> 					1452985230000,
> 					0,
> 					3
> 				]
> 			],
> 			"uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
> 			"from": 1450565985000,
> 			"to": 1452985230000,
> 			"min": [
> 				1452985230000,
> 				0
> 			],
> 			"max": [
> 				1451602785000,
> 				188.6625
> 			],
> 			"average": 123.879,
> 			"consumption": 83248.5,
> 			"rows": 7
> 		}
> 	],
> 	"debug": {
> 		"level": "1",
> 		"database": "pdo_mysql",
> 		"time": 2.85656,
> 		"uptime": 558679120,
> 		"load": [
> 			0.41,
> 			0.35,
> 			0.33
> 		],
> 		"commit-hash": "3bbeb9f96e7b9a795b1fdd38f272946dfc6064f8",
> 		"php-version": "5.6.14-0+deb8u1",
> 		"messages": [],
> 		"sql": {
> 			"totalTime": 2.8117651939392,
> 			"worstTime": 1.4451360702515,
> 			"queries": [
> 				{
> 					"sql": "SELECT MIN(timestamp) FROM (SELECT timestamp FROM data WHERE channel_id=? AND timestamp<? ORDER BY timestamp DESC LIMIT 2) t",
> 					"params": [
> 						3,
> 						1450566000000
> 					],
> 					"types": [],
> 					"executionMS": 1.4451360702515
> 				},
> 				{
> 					"sql": "SELECT MAX(timestamp) FROM (SELECT timestamp FROM data WHERE channel_id=? AND timestamp>? ORDER BY timestamp ASC LIMIT 2) t",
> 					"params": [
> 						3,
> 						1452985200000
> 					],
> 					"types": [],
> 					"executionMS": 0.020693063735962
> 				},
> 				{
> 					"sql": "SELECT aggregate.type, COUNT(aggregate.id) AS count FROM aggregate INNER JOIN entities ON aggregate.channel_id = entities.id WHERE uuid = ? AND aggregate.type <= ? GROUP BY type HAVING count > 0 ORDER BY type DESC",
> 					"params": [
> 						"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
> 						4
> 					],
> 					"types": [],
> 					"executionMS": 1.3102521896362
> 				},
> 				{
> 					"sql": "SELECT UNIX_TIMESTAMP(FROM_UNIXTIME(MIN(timestamp) \/ 1000, \"%Y-%m-%d\")) * 1000 FROM aggregate WHERE channel_id=? AND type=? AND      UNIX_TIMESTAMP(FROM_UNIXTIME(timestamp \/ 1000, \"%Y-%m-%d\")) * 1000 >=?",
> 					"params": [
> 						3,
> 						3,
> 						1450565970000
> 					],
> 					"types": [],
> 					"executionMS": 0.009148120880127
> 				},
> 				{
> 					"sql": "SELECT UNIX_TIMESTAMP(DATE_ADD(FROM_UNIXTIME(MAX(timestamp) \/ 1000, \"%Y-%m-%d\"), INTERVAL 1 day)) * 1000 FROM aggregate WHERE channel_id=? AND type=? AND timestamp<?",
> 					"params": [
> 						3,
> 						3,
> 						1452985230000
> 					],
> 					"types": [],
> 					"executionMS": 0.0017869472503662
> 				},
> 				{
> 					"sql": "SELECT COUNT(1) FROM (SELECT DISTINCT YEAR(FROM_UNIXTIME(timestamp\/1000)), YEARWEEK(FROM_UNIXTIME(timestamp\/1000)) FROM data WHERE channel_id = ? AND ( timestamp >= ? AND timestamp < ? OR timestamp >= ? AND timestamp <= ?) UNION SELECT DISTINCT YEAR(FROM_UNIXTIME(timestamp\/1000)), YEARWEEK(FROM_UNIXTIME(timestamp\/1000)) FROM aggregate WHERE channel_id = ? AND type = ? AND timestamp >= ? AND timestamp < ?) AS agg",
> 					"params": [
> 						3,
> 						1450565970000,
> 						"1450566000000",
> 						"1452985200000",
> 						1452985230000,
> 						3,
> 						3,
> 						"1450566000000",
> 						"1452985200000"
> 					],
> 					"types": [],
> 					"executionMS": 0.0064718723297119
> 				},
> 				{
> 					"sql": "SELECT MAX(timestamp) AS timestamp, MAX(value) AS value, SUM(count) AS count FROM (SELECT timestamp, value, 1 AS count FROM data WHERE channel_id = ? AND ( timestamp >= ? AND timestamp < ? OR timestamp >= ? AND timestamp <= ?) UNION SELECT timestamp, value, count FROM aggregate WHERE channel_id = ? AND type = ? AND timestamp >= ? AND timestamp < ?) AS agg GROUP BY YEAR(FROM_UNIXTIME(timestamp\/1000)), YEARWEEK(FROM_UNIXTIME(timestamp\/1000)) ORDER BY timestamp ASC",
> 					"params": [
> 						3,
> 						1450565970000,
> 						"1450566000000",
> 						"1452985200000",
> 						1452985230000,
> 						3,
> 						3,
> 						"1450566000000",
> 						"1452985200000"
> 					],
> 					"types": [],
> 					"executionMS": 0.018276929855347
> 				}
> 			]
> 		}
> 	}
> }
> 
> 
> 
> Best regards
> Armin
> 
> 
>> On 23.01.2016 at 19:54, Andreas Götz <cpuidle at gmail.com> wrote:
>> 
>> Hmm. Please append a &debug=1 and post the result. I'd like to see the SQL...
>> 
>> Best regards, Andreas 
>> 
>>> On 23.01.2016 at 19:26, application MGR <applicationMGR at ecoCuyo.de> wrote:
>>> 
>>> Hello Andreas,
>>> 
>>> unfortunately that did not have the desired effect. The same query 
>>> 
>>> http://<localhost>/middleware.php/data.json?from=1450566000000&to=1452985200000&group=week&%20uuid%5B%5D=xxxxxxxx-xxxx-xxxx-xxxx-695ed220d33d
>>> 
>>> now returns the following:
>>> 
>>> {"version":"0.3","data":[
>>> 	{
>>> 		"tuples":[
>>> 			[1451170785000,136.755,39831],	->	26.12.15 23:59:45
>>> 			[1451602785000,188.663,28347],	->	31.12.15 23:59:45
>>> 			[1451775585000,42.931,11520],	->	02.01.16 23:59:45
>>> 			[1452380385000,101.307,39865],	->	09.01.16 23:59:45
>>> 			[1452985185000,110.44,39772],	->	16.01.16 23:59:45
>>> 			[1452985230000,0,3]			->	17.01.16 00:00:30
>>> 		],
>>> 	}
>>> ]}
>>> 
>>> I haven't yet looked into what else would need to be changed …
>>> 
>>> Have a nice weekend
>>> Armin
>>> 
>>> 
>>>> On 21.01.2016 at 19:29, Andreas Goetz <cpuidle at gmail.com> wrote:
>>>> 
>>>> No idea... does it work, though?
>>>> 
>>>> 2016-01-21 19:15 GMT+01:00 application MGR <applicationMGR at ecocuyo.de>:
>>>>> Hello Andreas,
>>>>> 
>>>>> well, I'd certainly welcome that; I just don't want to break something while trying to improve it, in case someone needs the feature exactly as it is.
>>>>> 
>>>>> Are there any adverse side effects that need to be considered?
>>>>> 
>>>>> Best regards
>>>>> Armin
>>>>> 
>>>>>>> On 21.01.2016 at 13:14, Andreas Goetz <cpuidle at gmail.com> wrote:
>>>>>>> 
>>>>>>> 2016-01-21 12:29 GMT+01:00 application MGR <applicationMGR at ecocuyo.de>:
>>>>>>> Hello @
>>>>>>> 
>>>>>>> with the following query, with start date 20.12.15 00:00:00 and end date 17.01.16 00:00:00:
>>>>>>> 
>>>>>>> http://<localhost>/middleware.php/data.json?from=1450566000000&to=1452985200000&group=week&%20uuid%5B%5D=xxxxxxxx-xxxx-xxxx-xxxx-695ed220d33d
>>>>>>> 
>>>>>>> I get the following response from the middleware:
>>>>>>> 
>>>>>>> {"version":"0.3","data":[
>>>>>>> 	{
>>>>>>> 		"tuples":[
>>>>>>> 				[1451257185000,159.436,39831], 		-> 	27.12.15 23:59:45
>>>>>>> 				[1451602785000,155.903,22587],		->	31.12.15 23:59:45	
>>>>>>> 				[1451861985000,50.125,17280],		->	03.01.16 23:59:45
>>>>>>> 				[1452466785000,138.161,39864],		->	10.01.16 23:59:45
>>>>>>> 				[1452985230000,75.092,34016]		->	17.01.16 00:00:30
>>>>>>> 			],
>>>>>>> ...
>>>>>>> 	}
>>>>>>> ]}
>>>>>>> 
>>>>>>> Can anyone tell me why 31.12. is output here as an extra tuple? Does the last day of the year have a special meaning for group=week?
>>>>>> 
>>>>>> Very good question. The answer lies here: https://github.com/volkszaehler/volkszaehler.org/blob/master/lib/Interpreter/SQL/MySQLOptimizer.php#L63
>>>>>> 
>>>>>> By now (?) MySQL has a YEARWEEK() function which should produce the desired result.
>>>>>> 
>>>>>>> 
>>>>>>> Best regards
>>>>>>> Armin
>>>>>> 
>>>>>> Shall we change that?
>>>>>> 
>>>>>> Best regards,
>>>>>> Andreas
>>>>> 
>>>> 
>>> 
> 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://demo.volkszaehler.org/pipermail/volkszaehler-users/attachments/20160125/78b8689d/attachment.html>

