Serial in Postgres is being incremented even though I added ON CONFLICT DO NOTHING
I'm using Postgres 9.5 and seeing some weird things here.
I have a cron job running every 5 minutes that fires a SQL statement to add a list of records if they don't already exist.
INSERT INTO
sometable (customer, balance)
VALUES
(:customer, :balance)
ON CONFLICT (customer) DO NOTHING
sometable.customer is the primary key (text).
sometable structure is:
id: serial
customer: text
balance: bigint
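For reference, the table looks roughly like this (a sketch reconstructed from the structure above, not the exact DDL; customer carries the unique constraint that ON CONFLICT (customer) targets):
CREATE TABLE sometable (
    id serial,
    customer text PRIMARY KEY, -- the conflict target of ON CONFLICT (customer)
    balance bigint
);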
Now it seems like every time this job runs, the id field is silently incremented by 1. So the next time I actually add a record, its id is thousands above my last value. I thought this query checks for conflicts and, if so, does nothing, but currently it seems like it tries to insert the record, increments the id, and then stops.
Any suggestions?
database postgresql

asked May 13 '16 at 8:15 by Christian
This is by design: postgresql.nabble.com/… But you don't need to worry about those gaps, they are not a problem. The only thing a sequence guarantees is that it never generates the same number twice.
– a_horse_with_no_name
May 13 '16 at 8:21
If, for some reason, you actually need a gapless sequence (a rare requirement), then Postgres's SERIAL/SEQUENCE support is not what you should be using. See this answer from Craig Ringer.
– IMSoP
May 13 '16 at 9:11
4 Answers
The reason this feels weird to you is that you are thinking of the increment on the counter as part of the insert operation, and therefore the "DO NOTHING" ought to mean "don't increment anything". You're picturing this:
- Check values to insert against constraint
- If duplicate detected, abort
- Increment sequence
- Insert data
But in fact, the increment has to happen before the insert is attempted. A SERIAL column in Postgres is implemented as a DEFAULT which executes the nextval() function on a bound SEQUENCE. Before the DBMS can do anything with the data, it's got to have a complete set of columns, so the order of operations is like this:
- Resolve default values, including incrementing the sequence
- Check values to insert against constraint
- If duplicate detected, abort
- Insert data
This can be seen intuitively if the duplicate key is in the autoincrement field itself:
CREATE TABLE foo ( id SERIAL NOT NULL PRIMARY KEY, bar text );
-- Insert row 1
INSERT INTO foo ( bar ) VALUES ( 'test' );
-- Reset the sequence
SELECT setval(pg_get_serial_sequence('foo', 'id'), 0, true);
-- Attempt to insert row 1 again
INSERT INTO foo ( bar ) VALUES ( 'test 2' )
ON CONFLICT (id) DO NOTHING;
Clearly, this can't know if there's a conflict without incrementing the sequence, so the "do nothing" has to come after that increment.
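As a follow-up sketch (an assumption: the session above is continued as-is), the conflicting insert still consumed a value from the sequence, so the next successful insert skips it:
-- Continuing the session above: the conflicting INSERT consumed id = 1
-- from the reset sequence, so the next row gets id = 2, not 1.
INSERT INTO foo ( bar ) VALUES ( 'test 3' );
SELECT id, bar FROM foo;
-- expected rows: (1, 'test'), (2, 'test 3')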
answered May 13 '16 at 9:24 by IMSoP
As already said by @a_horse_with_no_name and @Serge Ballesta, serials are always incremented even if the INSERT fails. You can try to "roll back" the serial value to the maximum id used by changing the corresponding sequence:
SELECT setval('sometable_id_seq', MAX(id), true) FROM sometable;
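A slightly more defensive variant (a sketch, not part of the original answer; it assumes the default sequence name sometable_id_seq and guards against an empty table, where MAX(id) is NULL and setval, being strict, would silently do nothing):
-- COALESCE avoids passing NULL to setval; the false third argument makes
-- the next nextval() return 1 when the table is empty.
SELECT setval('sometable_id_seq', COALESCE(MAX(id), 1), MAX(id) IS NOT NULL)
FROM sometable;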
answered May 13 '16 at 8:39 by Adam

Note that passing true as the third parameter to setval is the same as not passing a third parameter. setval('sometable_id_seq', MAX(id), true) is equivalent to setval('sometable_id_seq', MAX(id)).
– BrDaHa
May 24 '17 at 21:07
As said by @a_horse_with_no_name, that is by design. Serial fields are implemented under the hood through sequences, and for evident reasons, once you have gotten a new value from a sequence, you cannot roll back that value. Imagine the following scenario:
- the sequence is at n
- transaction A requires a new value: it gets n+1
- a concurrent transaction B requires a new value: it gets n+2
- A rolls back its transaction for some reason - would you feel safe resetting the sequence?
That is why sequences (and serial fields) only document that, when transactions are rolled back, holes can occur in the returned values. Only uniqueness is guaranteed.
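A minimal single-session sketch of this behavior (assuming the sequence name from the question):
BEGIN;
SELECT nextval('sometable_id_seq'); -- suppose this returns n+1
ROLLBACK;
-- the rollback does not give n+1 back to the sequence:
SELECT nextval('sometable_id_seq'); -- returns n+2, not n+1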
answered May 13 '16 at 8:28 by Serge Ballesta
Yeah, thanks, I got that, but from my point of view a +1 doesn't make sense if DO NOTHING is provided in the query. It should check whether it needs to add anything before actually adding it. Also, I am worried about the gaps. This job runs every 5 mins...
– Christian
May 13 '16 at 8:40
So the only thing I can do is actually check myself whether this record needs to be added, by selecting all of them :-/
– Christian
May 13 '16 at 8:41
@Christian: there is nothing you need to do. Just ignore the gaps. There is nothing wrong with ID values having (large) gaps.
– a_horse_with_no_name
May 13 '16 at 8:43
@a_horse_with_no_name Unless you are inserting enough data to overflow a SERIAL in a "short" (purposefully vague) amount of time. I was going to do this for a query that runs ~50 times/second, but I would run out of unique values in roughly 1.36 years. I wish that instead of being forced to use BIGSERIAL there were a command like SELECT id FROM table WHERE uniqueCol = 'val' ON EMPTY INSERT INTO table (uniqueCol) VALUES ('val') ON CONFLICT(uniqueCol) DO UPDATE SET uniqueCol = EXCLUDED.uniqueCol RETURNING id. That would significantly reduce the number of wasted ids.
– Jeff G
Oct 11 '16 at 0:12
Well, there is a technique that allows you to do this kind of thing. It's called an insert mutex. It is old, but it works.
https://www.percona.com/blog/2011/11/29/avoiding-auto-increment-holes-on-innodb-with-insert-ignore/
The general idea is that you do an INSERT ... SELECT, and if your values would duplicate an existing row, the SELECT returns no rows, which of course prevents the INSERT, so the sequence is not incremented. A bit mind-boggling, but perfectly valid and performant.
This of course completely ignores ON DUPLICATE (ON CONFLICT in Postgres terms), but one gets back control over the sequence.
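Translated to the question's table, the guard would look something like this (a sketch, not from the linked article, which targets MySQL/InnoDB; note that two concurrent sessions can both pass the NOT EXISTS check, in which case the primary key still rejects one of them and a sequence value is consumed after all):
INSERT INTO sometable (customer, balance)
SELECT :customer, :balance
WHERE NOT EXISTS (
    SELECT 1 FROM sometable WHERE customer = :customer
);
-- if the WHERE clause filters everything out, no insert is attempted,
-- nextval() is never called, and the serial does not advance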
answered Jul 26 '18 at 16:24 by sp3c1 (edited Jul 26 '18 at 16:29)