Azure Cosmos DB site unreachable
Portal.azure.com sometimes doesn't load; the browser just shows "site unreachable." Most of the time it works fine. Is anyone else facing this issue? It isn't a problem with my internet connection.
[Screenshot: error page]
Also, I have seen a lot of posts about this but no solution: is the maximum document size in Azure Cosmos DB 2 MB? Is there a way to increase it, such as compressing the data, or some other workaround?
azure azure-cosmosdb
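For context on the compression workaround the question asks about: the 2 MB limit applies to the serialized item, so gzipping a verbose JSON payload and storing it as a base64 string can buy some headroom, at the cost of making the payload opaque to queries and indexing. A minimal sketch in Python, assuming the azure-cosmos SDK; the account endpoint, key, and database/container names are placeholders, not values from the thread.

    import base64
    import gzip
    import json

    from azure.cosmos import CosmosClient  # pip install azure-cosmos

    # Placeholder connection details; substitute your own account values.
    client = CosmosClient("https://<account>.documents.azure.com:443/",
                          credential="<account-key>")
    container = client.get_database_client("mydb").get_container_client("docs")

    def upsert_compressed(doc_id: str, payload: dict) -> None:
        # Gzip the JSON payload and store it as base64 text in a small
        # wrapper item; Cosmos DB cannot query or index the packed field.
        raw = json.dumps(payload).encode("utf-8")
        packed = base64.b64encode(gzip.compress(raw)).decode("ascii")
        container.upsert_item({
            "id": doc_id,              # assumes the container is partitioned on /id
            "encoding": "gzip+base64",
            "payload": packed,
        })

This only helps when the compressed payload still fits under 2 MB; for genuinely large files, the blob-storage approach discussed in the comments below is the usual answer.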
asked Nov 10 at 0:02 by v.n
edited Nov 14 at 8:34 by CHEEKATLAPRADEEP-MSFT
There's really no way for us to know why you can't reach the Azure portal. It could be related to your local network, your ISP, etc. In any event, that's not on-topic for Stack Overflow. As for the maximum document size: as documented, it's 2 MB, and you cannot increase it. But I'm curious why you feel you need more than 2 MB per document. If you're storing arrays that can grow over time with no limit, that's an "unbounded array" situation, which runs the risk of breaking your model at some point.
– David Makogon
Nov 10 at 22:13
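To make that 2 MB ceiling concrete: it applies to the serialized JSON of each item, so an application can fail fast with a size check before writing. A small standard-library-only sketch; the constant reflects the documented cap:

    import json

    MAX_ITEM_BYTES = 2 * 1024 * 1024  # Cosmos DB's documented per-item cap

    def serialized_size(doc: dict) -> int:
        # Measure the item as Cosmos DB will store it: serialized JSON.
        size = len(json.dumps(doc).encode("utf-8"))
        if size > MAX_ITEM_BYTES:
            raise ValueError(f"Item is {size} bytes; the per-item limit is 2 MB")
        return size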
Thank you for responding. Since I posted this, I haven't been able to reach the portal at all; previously it was just intermittent. I am trying to use Cosmos DB as my document database instead of MongoDB, and I have files larger than 2 MB, so I'm looking for a way to save those documents in Cosmos DB.
– v.n
Nov 12 at 15:56
I'd take a careful look at your document structure to determine why (or whether) it really needs to be so large. For instance: are you storing data that's never searched or indexed? That's perfect for moving to alternate storage, e.g. blob storage. Storing large trees of subdocuments? Perhaps split them into separate top-level referenced documents. Storing an ever-growing array? Consider moving those array elements to separate documents. This is the same guidance as when working with MongoDB (aside from the different max-document limit).
– David Makogon
Nov 12 at 18:25
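As an illustration of the "move array elements to separate documents" advice: instead of appending events to one ever-growing parent document, each event becomes its own item sharing the parent's partition key, so the parent stays small and all events are still readable with one partition-scoped query. A sketch assuming the azure-cosmos Python SDK; the orderId partition key and field names are illustrative, not from the thread.

    import uuid

    def add_event(container, order_id: str, event: dict) -> None:
        # One item per event, instead of an unbounded array on the parent.
        container.create_item({
            "id": str(uuid.uuid4()),
            "orderId": order_id,   # assumed partition key: /orderId
            "type": "orderEvent",
            **event,
        })

    def load_events(container, order_id: str):
        # All of an order's events share a partition, so this query stays
        # cheap and never touches the parent document.
        return container.query_items(
            query="SELECT * FROM c WHERE c.type = 'orderEvent'",
            partition_key=order_id,
        )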
Sorry for the delay. It turned out to be just a network issue, and we were able to solve it. We also moved to blob storage to handle the file sizes, and we are using SQL to store the metadata.
– v.n
yesterday
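A minimal sketch of the pattern the asker landed on: the oversized payload goes to Azure Blob Storage, and only a small metadata record, which can live in SQL (as the asker chose) or in Cosmos DB, keeps a pointer to it. Assumes the azure-storage-blob SDK; the connection string and container name are placeholders.

    from datetime import datetime, timezone

    from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

    # Placeholder connection string; substitute your storage account's.
    service = BlobServiceClient.from_connection_string("<storage-connection-string>")

    def store_large_file(doc_id: str, data: bytes) -> dict:
        # Upload the payload to blob storage, then return the small
        # metadata record to persist in SQL, Cosmos DB, or elsewhere.
        blob = service.get_blob_client(container="documents", blob=f"{doc_id}.bin")
        blob.upload_blob(data, overwrite=True)
        return {
            "id": doc_id,
            "blobUrl": blob.url,
            "sizeBytes": len(data),
            "uploadedAt": datetime.now(timezone.utc).isoformat(),
        }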