Grafana pod keeps restarting after helm install
I have a clean AKS cluster to which I deployed the prometheus-operator chart. The Grafana pod is showing a ton of restarts. My cluster version is 1.11.3. Logs from the Grafana pod are below. Has anyone else encountered this issue?
File in configmap grafana-dashboard-k8s-node-rsrc-use.json ADDED
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 543, in _update_chunk_length
    self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 302, in _error_catcher
    yield
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 598, in read_chunked
    self._update_chunk_length()
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 547, in _update_chunk_length
    raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/sidecar.py", line 58, in <module>
    main()
  File "/app/sidecar.py", line 54, in main
    watchForChanges(label, targetFolder)
  File "/app/sidecar.py", line 23, in watchForChanges
    for event in w.stream(v1.list_config_map_for_all_namespaces):
  File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 124, in stream
    for line in iter_resp_lines(resp):
  File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
    for seg in resp.read_chunked(decode_content=False):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 626, in read_chunked
    self._original_response.close()
  File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 320, in _error_catcher
    raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
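In case it helps with triage, per-container restart counts and the logs of the previously crashed container instance can be pulled with standard kubectl commands like the ones below; the pod and container names are placeholders, and add -n <namespace> if the release was not installed into the default namespace.

# Per-container restart counts inside the Grafana pod
kubectl get pod <grafana-pod-name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'

# Logs from the previous (crashed) instance of a given container
kubectl logs <grafana-pod-name> -c <container-name> --previous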
kubernetes grafana kubernetes-helm azure-kubernetes prometheus-operator
edited Nov 9 at 23:48 by Emruz Hossain
asked Nov 9 at 18:51 by Jerry Joyce
Looks like you have a python sidecar. Do you have the deployment/pod definition for grafana? – Rico, Nov 10 at 2:02
Yes, there are three containers in the pod: kiwigrid/k8s-sidecar:0.0.3, kiwigrid/k8s-sidecar:0.0.3, and grafana/grafana:5.3.1. – Jerry Joyce, Nov 12 at 17:34
What did you use to install this? The guide I followed doesn't have sidecars. – Rico, Nov 12 at 18:13
helm install stable/prometheus-operator – Jerry Joyce, Nov 13 at 19:18
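Following up on the thread above, here is a minimal sketch of the same install with an explicit release name, plus the values switch that should disable the dashboard sidecar if it keeps crashing. The grafana.sidecar.dashboards.enabled path is assumed from the stable/grafana subchart, so verify it against the chart's values first.

# Helm 2 style install with an explicit release name and namespace (names are placeholders)
helm install stable/prometheus-operator --name monitoring --namespace monitoring

# Verify the assumed values path, then disable the dashboard sidecar if needed
helm inspect values stable/prometheus-operator | grep -A5 sidecar
helm upgrade monitoring stable/prometheus-operator --set grafana.sidecar.dashboards.enabled=false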
1 Answer
Based on the Prometheus operator repository... the sidecar container in the Grafana pod is failing to contact Grafana and reload/refresh the dashboards defined in the configmap being watched.
So this is a symptom of the Grafana container failing. Can you check the logs of the Grafana container inside your Grafana pod?
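For example, something along these lines should show the Grafana container's own logs and whether Grafana is answering at all. The container name "grafana" and port 3000 are the chart defaults and may differ in your install.

# Logs from the Grafana container only
kubectl logs <grafana-pod-name> -c grafana

# Port-forward and hit Grafana's health endpoint from your workstation
kubectl port-forward <grafana-pod-name> 3000:3000 &
curl -s http://localhost:3000/api/health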
answered Nov 11 at 15:25 by Carlos
The logs for the Grafana container appear normal and I am able to view the dashboards in a browser. The pod restarts have also leveled out: there were 280 in the first 12 hours or so and none since. The dashboard appears to be working, but it is a bit troubling that I am still seeing failures in the sidecar container logs, as in the original question. – Jerry Joyce, Nov 12 at 17:38