Solr / Lucene - prefix query on strings that end in numeric value
I'm noticing some behaviour that I can't explain in Solr (version 7.5). I have two documents that each contain a field holding the full path to a file.

doc1:
path: ["/home/kyle/filea.txt"]

doc2:
path: ["/home/kyle/file1.txt"]
- If I issue the query path:filea.*, doc1 is correctly returned.
- If I issue the query path:file1*, doc2 is correctly returned.
- If I issue the query path:"file1.*", doc2 is correctly returned.
- If I issue the query path:file1.*, doc2 is NOT returned.
I have the default TokenizerChain on both the index analyzer and the query analyzer, and the field is multi-valued.
So my question: what is Solr/Lucene doing behind the scenes that causes the query

<string><number>.*

to not return the document I expect, when the other generic cases:

- <string>.* (no trailing number)
- <string><number>* (no dot in the query)
- "<string><number>.*" (query in quotes)

all return what I think they should?
solr lucene
asked Nov 13 '18 at 23:07
Kyle Fransham
1 Answer
Your analyzer splits strings into tokens based on the rules specified in UAX #29; the rules of interest here are WB6-WB12. It will not split a group of letters at a period (e.g. an abbreviation), nor a group of digits (e.g. a decimal number), but it will split at a period that has a letter on one side and a digit on the other.
That is:
- "one.two" becomes one token: "one.two". In doc1, you get the token "filea.txt".
- "1.2" becomes one token: "1.2".
- "one.2" becomes two tokens: "one" and "2". In doc2, you get the tokens "file1" and "txt".
- "1.two" becomes two tokens: "1" and "two".
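Just these word-break cases can be approximated in a few lines of Python. This is a toy sketch, not the full UAX #29 spec and not Lucene's actual StandardTokenizer; the regex only covers the letter/digit/period behavior discussed here:

```python
import re

# Alphanumeric runs may join across a period only when the characters
# on both sides of the period are letters, or both are digits. A period
# between a letter and a digit (either order) is a break.
TOKEN = re.compile(
    r"[A-Za-z0-9]+"
    r"(?:(?<=[A-Za-z])\.(?=[A-Za-z])[A-Za-z0-9]+"
    r"|(?<=[0-9])\.(?=[0-9])[A-Za-z0-9]+)*"
)

def tokenize(text):
    return TOKEN.findall(text)

print(tokenize("one.two"))                # ['one.two']
print(tokenize("1.2"))                    # ['1.2']
print(tokenize("one.2"))                  # ['one', '2']
print(tokenize("1.two"))                  # ['1', 'two']
print(tokenize("/home/kyle/filea.txt"))   # ['home', 'kyle', 'filea.txt']
print(tokenize("/home/kyle/file1.txt"))   # ['home', 'kyle', 'file1', 'txt']
```

Note that "file1" itself stays one token (letters and digits join directly); the break only happens at the period, because its left side is a digit and its right side is a letter.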
The other thing to understand is that wildcard queries are not analyzed, so they will not match patterns that, after analysis, would span two tokens, or, as in this case, that contain characters eliminated during tokenization.
So, your queries:

- path:filea.* looks for "filea." as a prefix. It matches because "filea.txt" is a token in the index.
- path:file1* looks for "file1" as a prefix. It matches because "file1" is a token in the index.
- path:"file1.*" is a phrase query, and there are no wildcards in phrase queries. So "file1.*" is passed through analysis, which eliminates the punctuation, and becomes "file1", which is found in the index.
- path:file1.* looks for "file1." as a prefix. "file1" and "txt" are in the index, but "file1." is not, so it doesn't match anything.
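The difference can be sketched as a toy prefix match over the indexed tokens (illustrative only; these are not Solr or Lucene APIs), using the tokens the analyzer produced for both documents:

```python
# Tokens indexed for doc1 and doc2, per the analysis above.
index = {"home", "kyle", "filea.txt", "file1", "txt"}

def prefix_match(prefix):
    """A wildcard prefix is compared against raw indexed tokens;
    no analysis is applied to the query pattern itself."""
    return sorted(t for t in index if t.startswith(prefix))

print(prefix_match("filea."))  # ['filea.txt']  -> doc1 matches
print(prefix_match("file1"))   # ['file1']      -> doc2 matches
print(prefix_match("file1."))  # []             -> no token starts with "file1."
```

The quoted case path:"file1.*" takes a different route entirely: as a phrase query it is analyzed, the period and asterisk are stripped, and the surviving term "file1" is an exact token in the index.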
Great, thorough answer. Thank you!
– Kyle Fransham
Nov 14 '18 at 13:33
edited Nov 13 '18 at 23:59
answered Nov 13 '18 at 23:52
femtoRgon