Is it 40% or 0.4%?
A variable that should contain percentages also contains some "ratio" values, for example:
0.61
41
54
.4
.39
20
52
0.7
12
70
82
The real distribution parameters are unknown, but I guess the distribution is unimodal, with most (say over 70% of) values falling between 50% and 80%; very low values (e.g., 0.1%) are also possible.
Is there any formal or systematic approach to determine the likely format in which each value is recorded (i.e., ratio or percent), assuming no other variables are available?
data-cleaning
asked 4 hours ago by Orion; edited 3 hours ago
I'm voting to close this question as off-topic because it is impossible to definitively answer. If you don't know what the data mean, how will strangers on the internet know? – Sycorax, 4 hours ago

Read it again. It is not about asking strangers on the Internet to guess the data mean. – Orion, 4 hours ago

What the data mean != what is the (data) mean. – Nick Cox, 3 hours ago

Oh, OK. Correction: the question is not about asking strangers on the Internet what the data mean. Hooray. – Orion, 3 hours ago

You have 3 options: your big numbers are falsely big and need a decimal point in front; your small numbers are falsely small and need a 100x multiplier; or your data are just fine. Why don't you plot the qqnorm of all three options? – EngrStudent, 3 hours ago
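
For concreteness, EngrStudent's three-way comparison might look roughly like this in R (a sketch only; it assumes the raw values sit in a numeric vector x and uses the example values from the question):

# Three candidate readings of the data, following EngrStudent's comment
x <- c(0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82)

as_is       <- x                           # option 3: the data are fine as recorded
big_shrunk  <- ifelse(x > 1, x / 100, x)   # option 1: the big values are falsely big
small_grown <- ifelse(x < 1, x * 100, x)   # option 2: the small values are falsely small

op <- par(mfrow = c(1, 3))
qqnorm(as_is,       main = "as recorded");        qqline(as_is)
qqnorm(big_shrunk,  main = "big values / 100");   qqline(big_shrunk)
qqnorm(small_grown, main = "small values * 100"); qqline(small_grown)
par(op)

Whichever panel hugs its reference line most closely is the most plausible single-format reading; with the values above, multiplying the sub-1 entries by 100 is the version most consistent with the "mostly between 50% and 80%" hunch in the question.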
2 Answers
Assuming:

- The only data you have are the percents/ratios (no other related explanatory variables).
- Your percents come from a unimodal distribution $P$, and the ratios come from the same unimodal distribution but squished by a factor of $100$ (call it $P_{100}$).
- The percents/ratios are all between $0$ and $100$.

Then there is a single cutoff point $K$ (with $K < 1.0$, obviously) such that everything under $K$ is more likely to have been sampled from $P_{100}$ and everything over $K$ is more likely to have been sampled from $P$.

You should be able to set up a maximum likelihood function with a binary membership parameter for each data point, plus the parameters of your chosen $P$.

Afterwards, find $K$ as the point where the densities of $P$ and $P_{100}$ intersect, and use it to clean your data.

In practice, just split your data into the ranges 0–1 and 1–100, fit and plot a histogram for each, and fiddle around with what you think $K$ is.

– djma, answered 2 hours ago
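
As an illustration of that likelihood idea (a sketch only, and marginalizing over the per-point labels rather than optimizing them as binary parameters), one can fit a two-component mixture in which each value is either a percent drawn from P or that same draw divided by 100. Here a normal distribution stands in for the unimodal P, and w is the unknown share of ratio-coded values:

x <- c(0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82)

# Mixture density: with probability (1 - w) a value is a percent X ~ N(mu, sigma);
# with probability w it is X / 100, whose density is 100 * dnorm(100 * x, mu, sigma).
negloglik <- function(par, x) {
  mu    <- par[1]
  sigma <- exp(par[2])     # log scale keeps sigma positive
  w     <- plogis(par[3])  # logit scale keeps the mixing weight in (0, 1)
  dens  <- (1 - w) * dnorm(x, mu, sigma) + w * 100 * dnorm(100 * x, mu, sigma)
  -sum(log(dens))
}

fit   <- optim(c(mean(x), log(sd(x)), 0), negloglik, x = x)
mu    <- fit$par[1]
sigma <- exp(fit$par[2])
w     <- plogis(fit$par[3])

# Posterior probability that each value was recorded as a ratio;
# flagged values are rescaled back to the percent scale.
p_ratio <- w * 100 * dnorm(100 * x, mu, sigma) /
           ((1 - w) * dnorm(x, mu, sigma) + w * 100 * dnorm(100 * x, mu, sigma))
x_clean <- ifelse(p_ratio > 0.5, 100 * x, x)
round(cbind(x, p_ratio, x_clean), 3)

The cutoff K is implicit here: it is where the two weighted component densities cross, and with a P concentrated well above 1 that crossing sits near or below 1, which is why the pragmatic "split around 1" rule at the end of the answer usually works.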
Here's one method of determining whether your data are percents or proportions: if there are out-of-bounds values for a proportion (e.g. 52, 70, 82, 41, 54, to name a few), then they must be percents.

Therefore, your data must be percents. You're welcome.

– beta1_equals_beta2, answered 3 hours ago
The issue is that the two are mixed together. It's not all percents or all ratios/proportions. 49 is a percentage, but 0.49 could be either. – The Laconic, 3 hours ago

If you can't assume there is a unified format for all of the rows, then the question is obviously unanswerable. In the absence of any other information, it's anyone's guess whether the 0.4 is a proportion or a percentage. I chose to answer the only answerable interpretation of the question. – beta1_equals_beta2, 3 hours ago