What is ground truth: home meter or lab venipuncture?

Valve Replacement Forums

Protimenow - Along with the others, I'm so sorry about your recent stroke, and I hope you recover quickly and completely.

PEM - I've only been home testing with my Inratio2 meter for about 5 months, and have had only one occasion where I had both a meter test and a lab test taken from blood drawn within an hour or two of the home test. On that occasion, the home test result was 3.4 and the lab test result was 2.8.
I just assumed the truth was somewhere in the middle, around 3.1, or that the lab test was probably wrong, since the blood wasn't actually tested until the day after it was shipped from the doctor's office to the lab.

But, your formula of home-test-raised-to-the-power-of-0.85 works perfectly for the 3.4/2.8 home/lab difference on the one data point I can add from my Inratio2 meter.

This, along with Protimenow's experience, makes me a bit nervous too. If your formula holds for my Inratio2 meter as well, then to stay within a lab-draw range of 2.5-3.5, I should strive for a home-test range of 2.9-4.4.
I have no problem with the 2.9 side, but if my home reading is 4.4, I usually consider that well out of range and adjust my warfarin accordingly.
To be within range under both test types, it would appear I should strive for a home-test range of 2.9 to 3.5 on my meter. From my history so far, this tighter home-test target range will be difficult, if not impossible, for me to achieve.
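For anyone who wants to play with the numbers, the range conversion above is just pem's empirical power law and its inverse (the 0.85 exponent comes from this thread's data, not from any manufacturer spec); a quick Python sketch:

```python
# pem's empirical model from this thread: lab_INR ~= meter_INR ** 0.85.
# This is a forum-derived fit, NOT a validated medical conversion.

def lab_from_meter(meter_inr, exponent=0.85):
    """Predict the lab-draw INR from a home meter reading."""
    return meter_inr ** exponent

def meter_from_lab(lab_inr, exponent=0.85):
    """Invert the power law: which meter reading maps to a given lab INR?"""
    return lab_inr ** (1.0 / exponent)

# The single data point from this post: meter 3.4 vs. lab 2.8
print(round(lab_from_meter(3.4), 1))   # 2.8

# Meter-side targets that keep the predicted lab value within 2.5-3.5
print(round(meter_from_lab(2.5), 1))   # 2.9
print(round(meter_from_lab(3.5), 1))   # 4.4
```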

I would also be interested in hearing from others as to whether their Inratio2 meters show the same offset from lab-draw INR tests.

Another test I plan to perform at my next home test is to run two tests with my meter within a half hour of each other, just to check the repeatability of the meter itself with no other variables (same test strip lot number).

When I get a new batch of test strips, and have two with different lot numbers, I'll check the repeatability again by running two tests within 1/2 hour of each other using strips from different lots.

I'll post with my results after these tests, but I would be interested to know if anyone has already done such tests and what their results were.

---------------------

UPDATE to Post:
I just found the following from the FDA website after a bit of googling:
http://www.accessdata.fda.gov/cdrh_docs/reviews/K092987.pdf

Although there is no date on the report, it does make for some interesting reading.

In the "analytical performance" part of the report, because Biosite/Alere is using the venous blood draw as the basis for the purported accuracy of the meter & test strip readings, it would seem that "Ground Truth" would be the venous blood draw method test results.

The other item that caught my eye is that the test strips can have a +/- 0.5 INR offset and a +/- 30% relative bias within the therapeutic range of INR 2.0 to 4.5. I assume that this absolute offset and relative bias are what get calibrated out when the test strip lot code is entered into the Inratio2 meter for a given manufacturing lot.

The report seems to say that there is a residual overall accuracy level of about +/- 8.5%. So, for a nominal reading of INR = 3.0, this gives an allowed uncertainty of about +/- 0.25 on the INR reading.

From PEM's results, and my own single point comparison with a blood draw test, it would seem that these accuracy results may not always be met. Perhaps some other factors are at work to skew the results, but I plan to get a few more test comparisons to blood draw tests for my own meter if I can.

Very interesting - thanks for the data point!

I have tested across InRatio machines (they sent me a new one and so I did a comparison before sending the old one back), across test strip lots, and within lots. All were consistent within 0.2. They seem very self-consistent and also reliably predict the lab result. So the question remains, if we can't operate in the narrow range you defined (2.9 - 3.5 on the home meter), then how do we ascertain which result is the unbiased, ground truth result - the lab or the meter?

Toward that end, I had an interesting conversation with a subject matter expert from Quest Diagnostics yesterday. Among the enlightening things she had to say was that there is another test called the chromogenic factor 10 (factor X) assay. It exists to measure anticoagulation for people with the lupus anticoagulant, who cannot be reliably tested using standard methods. So the idea she proposed was that I could have the factor 10 test done to try to determine whether the lab result or the home meter result is correct. I am waiting to find out whether the factor 10 test actually produces an "equivalent INR" number or just indicates whether someone is in range. Also, I suspect the therapeutic range used in the factor 10 test, which is 11% to 42%, corresponds to an INR range of 2.0 to 3.0, which is more typical than the mechanical-valve range of 2.5 to 3.5 that we try to stick to.

Another interesting bit of information from the Quest SME is that the reagents used to measure INR are assigned ISI values based on their responsiveness to "gold standard" patient blood samples. In other words, they use an antiquated method called the "tube tilt test" to visually time the onset of clotting. This procedure is meticulously used to generate blood samples that correspond to INRs of 1.0, 1.5, 2.0, 2.5, and 3.0. The reagent manufacturers then use these predetermined samples to generate their sensitivity indices (ISI values). What's notable here is that the gold-standard patient samples only go up to an INR of 3.0. That could underlie the observation that above an INR of 3.0, home meters and lab tests become less reliable and less consistent with each other. One rationale, consistent with my conversion model, is that the relationship is roughly linear up to 3.0, but that nonlinearities become more pronounced after that. So if labs or meters use linear extrapolation above 3.0, they may under- or over-predict.
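Since ISI values keep coming up in this thread, it may help to show the standard relationship between prothrombin time (PT) and INR; the PT numbers below are made up purely for illustration, not real lab data:

```python
# Standard INR definition: (patient PT / mean normal PT) raised to the
# reagent's ISI. The PT values here are illustrative only.

def inr(pt_patient_sec, mean_normal_pt_sec, isi):
    return (pt_patient_sec / mean_normal_pt_sec) ** isi

# The same PT ratio of 1.9 seen through two different reagent ISIs:
print(round(inr(22.8, 12.0, 1.0), 2))  # ISI 1.0: INR 1.9
print(round(inr(22.8, 12.0, 1.8), 2))  # ISI 1.8: the larger exponent
                                       # amplifies any error in the PT ratio
```

This is one way to see why high-ISI reagents are viewed as less reliable: the exponent magnifies small measurement errors in the underlying PT ratio.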

Thanks a lot for the FDA info. I don't understand it yet, so I will have to carefully reread your posting :)

Best,
pem
 
Pem - very interesting stuff.

There is a new meter that I'm trying to learn more about, and it apparently uses a method that is different from the one used by other meters to determine INR.

Currently, it may be a bit more expensive than the InRatio or CoaguChek XS, but the manufacturer claims more accuracy and a stronger association with ACTUAL prothrombin times. When I learn more, I'll post more information (or maybe put it on one of my websites for all to read).

---

The odd irony may be a decade or two down the road -- we'll finally have cheap, extremely accurate meters that are readily available. The FDA may realize that it's not going to hurt people to know their INRs, so it will no longer require a prescription to get the stupid meters. And, by that time, perhaps there will be a drug specifically made to address the clotting issues related to A-Fib and implanted mechanical valves: without side effects, with an available antidote (if necessary), and proven to be safe and effective (so meters won't be necessary). Of course, the drug will probably cost $10 a day - so we may STILL be using Warfarin.
 
There is a new meter that I'm trying to learn more about, and it apparently uses a method that is different from the one used by other meters to determine INR.

What is the name of this new meter? Thanks for posting your findings...

Thanks,
pem
 
In the "analytical performance" part of the report, because Biosite/Alere is using the venous blood draw as the basis for the purported accuracy of the meter & test strip readings, it would seem that "Ground Truth" would be the venous blood draw method test results.

The other item that caught my eye is that there can be a +/- 0.5 INR offset, with a +/- 30% relative bias of the test strips within the therapeutic range for INR values of 2.0 to 4.5. I am making an assumption that the absolute offset and relative bias of the test strips is what is calibrated out when the test strip lot code number is entered into the Inratio2 meter for a given manufacturing lot.

Two questions:

1) What reagent and instrument were used to analyze the venous blood draw used for comparison?

2) What is the difference between "offset" and "bias"?

Apparently, labs undergo a mandated QC process by which they participate in a test that compares their INR values to others in the same peer group (who use the same reagent). Those who participate in the College of American Pathology Survey are then privy to the results (if they pay for them). These results reveal differences in reported values across peer groups. Thus, if we were privy to these results we'd be able to see the extent to which values from labs using different reagents might vary.

From what I understand, ISI values above 2.0 for the reagents used in lab tests are considered unreliable; lower values, closer to 1.0, are considered better. The lab from which I obtained the results reported at the top of this thread is Clinical Pathology Laboratories. Their reagent has an ISI value of 1.8. I intend to get a parallel test this week at a Quest lab, which uses a reagent called Innovin with an ISI value close to 1.0. I may also pursue the factor 10 test if that seems worthwhile. I'll report my findings back to the forum.

Best,
pem
 
PEM - From the following sentence in the report:
The capillary INRatio/INRatio2 results were compared to the venous reference method to determine accuracy and correlation.
I take this to be the answer to your question:
"how do we ascertain which result is the unbiased, ground truth result - the lab or the meter?"
with the answer being the lab draw test.

I realize that my one single data point by itself is not statistically significant, but I find it curious that it follows the rule you have determined from your larger data sample.

Also, my meter is fairly new - less than 6 months old - sent directly to me by Phillips/Alere.

I do not have the Lupus complication, so for my results, this should not be a factor.

I may have done things wrong on a few tests, as I now understand after reading the above-referenced report, which explains just how the test works. Because the test bases its result on the time between electrical impedance measurement minimums, you should NEVER add more blood to the sample after the first drop hits the test strip. I have, on occasion, let a second drop add to the reservoir when the initial drop seemed on the small side.
Clearly, this is a no-no.

Other than that - and only a few tests might be affected by that procedure - I don't have any reason to believe that my home-test meter should differ from the lab draw by more than 0.25. But from my one and only data point, it did. So I do plan to investigate a bit more; although I won't drive myself crazy over it, my scientific curiosity is aroused.

Also - to answer the question you asked in another thread, I have found the best algorithm for warfarin dosing to be the one in Table 1 of the report at:

http://www.hopkinsmedicine.org/hema...ing_algorithm_Kim_YK_and_Kaatz_S_JTH_2010.pdf

The left side of Table 1 addresses a target INR of 2.0-3.0, while a target INR of 2.5-3.5 is addressed on the right side of the table.


To add an answer to your question above:
2) What is the difference between "offset" and "bias"?

The "offset" is a constant difference between the two results, and can be viewed as the difference in "zero-crossing" values of the "best-fit" lines plotting the results of the two test methods.
The "bias" , in this case, I take to be a difference in the slope of the "best fit" line of the meter test strip results versus the lab draw results. it's just the slope if you assume they do a straight-line fit. It might be more a more complicated correction of the line shape as well as slope if they do a higher-order curve fitting.

From my reading of the report, I gather that they arrive at corrections/calibrations for these factors for each batch of test strips, and those correction factors are used by the meter to adjust the results for the calibration factors unique to each lot of test strips. That's likely the reason why you need to enter/verify the test strip lot number code into the Inratio2 meter before each test.
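To make the offset/bias terminology concrete, here is a toy least-squares fit in Python; the meter and lab numbers are invented so the fit recovers a known slope and intercept, and are not taken from anyone's actual tests:

```python
# Toy illustration of "offset" (intercept) vs. "bias" (slope) between two
# measurement methods, via a plain least-squares line fit. All numbers
# are invented so the fit recovers a known answer.

def linear_fit(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

meter = [2.0, 2.5, 3.0, 3.5, 4.0]        # hypothetical meter INRs
lab = [0.1 + 0.9 * m for m in meter]     # lab = offset 0.1 + bias 0.9 * meter

slope, intercept = linear_fit(meter, lab)
print(round(slope, 2), round(intercept, 2))  # 0.9 0.1
```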
 
Wow. These discussions flash me back to my Biostatistics classes.

Yes, as I understand it, Newmitral's statement that the strip makers test and calibrate each lot of strips to tell the meter how to adjust the results in order to get an accurate result (I'm paraphrasing) is correct. It's also the reason that each batch of CoaguChek XS strips comes with a code chip, and the Protime cuvettes have the calibration and date information printed in a clear plastic label that is stuck to the cuvette during manufacture. Wrong code can equal wrong result.

I'm guessing that the strip manufacturers may be even more careful at making sure each batch of strips is accurate, because any provable error will ultimately be traced to them. By contrast, a lab supply company that makes the reagents can probably point their fingers at a lot more places -- lab error, improper storage, improper handling of blood sample, etc.

The idea of going to a few different labs - each using different reagents - is a really good one. (The idea of finding a lab or university that is interested in testing the accuracy of various reagents using the same blood sample would also be interesting). Even going to two different labs that use the SAME reagent may help determine how much a lab's results can be trusted.
 

Thanks so much for the great dialog on this. Thanks specifically for the pointer to the dosing algorithm and replies to my other queries. Your explanation of offset vs bias is helpful. I suspect the home meters assume a linear bias, which is why a power function is needed in my predictive model to adjust for this. What do you think?

If you ever dig deeper and do another parallel comparison (try to do POC and lab tests within an hour of each other without eating or drinking in between), please let me know your results.

I did a side-by-side test today between my home meter and another point-of-care meter (Coaguchek XL) at the cardiologist's office. The results were InRatio=3.0, Coaguchek=2.9. I also did two different lab draws (yes I was driving around like a crazy person), one at CPL and one at Quest. My model predicts that the CPL result will be 2.5 (just on the edge of therapeutic). The CPL reagent has an ISI of 1.80 and the Quest reagent has an ISI of about 1.0, which should be, in theory, closer to ground truth. I will post both lab results when I get them.

Best,
pem
 
I plan to do some parallel testing, myself.

Yesterday, an anticoagulation clinic did a test using a meter made by ITC (the manufacturers of the ProTime meters), and got a 2.5. Interestingly, the ITC meter is one that they can, apparently, actually CALIBRATE. (Come to think of it, ITC has some standard-value test solutions -- you put the solution onto the test strip and run the test to check for accuracy -- but I don't know how an adjustment is made.)

I haven't bridged since Tuesday night, and I got an InRatio 2 meter to replace my possibly defective InRatio meter. I'll test with both meters and see what they tell me. And, friends, I'll share the results here.

I have an appointment for a blood draw next week, and guess what I'll be doing with my meters.
 
I suspect the home meters assume a linear bias, which is why a power function is needed in my predictive model to adjust for this. What do you think?

My personal preference is to always provide a factual document or scholarly article reference if I can, like with the manufacturer's FDA report or the dosing algorithm, but in this case all I can offer is speculation. So, please consider the following in that light.

My opinion is that the home meters likely use a higher-order curve-fitting algorithm, probably at least a third-order polynomial. That would generally lower the residual error below that of a simple linear best fit. The meter/test strip manufacturer's goal, as indicated in the FDA report, is to get the residual error down as much as possible after the correction is applied, so that makes the most sense. Without seeing some report that actually contains the raw data, it is difficult to know. Furthermore, since the test strip code has 5 characters, each of which can take one of about 35 values (alphanumeric - I'm not sure whether they use both zero and the letter O, or the number 1 and the letter I), there is far more information available in the strip code than would be necessary for just a linear correction for each test strip lot.
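As a sanity check on that last point, the code space implied by the post's assumption (5 characters, roughly 35 values each) works out to tens of millions of combinations - far more than the handful of coefficients a linear correction would need:

```python
# Rough information capacity of a 5-character lot code, using the post's
# assumption of about 35 possible values per character.
import math

combos = 35 ** 5
print(combos)                       # 52521875 distinct codes
print(round(math.log2(combos), 1))  # about 25.6 bits of information
```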

I think what your predictive model (lab=meter^0.85) is giving you is an additional correction of the meter sensitivity to your Thalassemia condition. This is purely speculation on my part, but past postings on this forum seem to indicate that others believe the meter may be sensitive to that condition, and could give wrong readings in that situation.
The Professional User Guide for the meter, available at:
https://sdmctrlprod.biosite.com/MC/main/mastercontrol/vault/view_doc.cfm?ls_id=1637AE860C782040C3
does say: "A hematocrit (percentage of blood that is red blood cells) that is higher or lower than the validated operating range of the INRatio®2 system can cause an inaccurate result."

You may have taken enough data for your own case to arrive at an empirical correction factor to account for this with your blood characteristics.
It would take some more research, and possibly direct discussion with the meter manufacturer to confirm that speculation, but it seems reasonable to me that it might be what is going on in your case.

My own meter should be reading within 0.25 of the venipuncture lab test, and I need to do further testing and comparison of my own meter to see why the one data point I do have exceeds that difference. I don't have any such complications that could account for it in my case. I will check my meter for self-consistency, but it may be another month or two before I have the opportunity to go to the lab for a venipuncture INR test, at which time I'll do a meter test at the same time to get another data point. Hopefully the meter will be within the 0.25 allowable error range for me next time, or I may need to talk with the meter manufacturer myself - maybe to get another meter.
 
This discussion (and I'm a willing participant) may have been enough to put most non-scientists/non-statisticians to sleep. I think we should be able to accept the probability that no lab test can be completely reliable or accurate. With the inability to ABSOLUTELY assure that the thromboplastin reagent is completely accurate, plus the inability to control the handling of blood once it's been drawn, a lab INR result can be close but not necessarily 100% accurate.

Meters pose somewhat different problems. There are always questions about the method used to obtain the sample -- for the CoaguChek XS or InRatio and InRatio 2, you have to ask if the blood was put onto the strip within the required fifteen seconds. You also have to wonder if a second drop was added to the strip so that there was enough to sample. You have to wonder if the meter was kept perfectly still (for example, sitting on a solid surface, like a table) or held in the tester's hand. Although the reagents used in the strips are carefully tested, I don't think it's been revealed what they're tested AGAINST. If the strips are calibrated to the somewhat imperfect reagents used by labs, the results can only be as good as a lab result.

What may matter here is that: a) we may never REALLY know what a PERFECTLY ACCURATE INR is, whether performed by a lab or a meter and b) it may not really matter all that much as long as we're at a comfortable spot within our range.

These ranges were developed based on clinical experience and animal tests -- and there are different ranges for valvers than there are for A-fib patients. Plus, it may be argued that being below range, BRIEFLY, with efforts to quickly get back into range, may not be life threatening.

What I'm saying is that, regardless of test method, if your meter is accurate (and I'm not sure about my old meter, which may have contributed to my stroke), and you're in range, you should be okay.

However, based on my recent experience, it's still a great idea to compare a meter's result with a lab result, or with the result of a meter of known (relative) accuracy.
 
I used to have a lot of faith in my meters. A few years ago, I had a concussion. The day before, I tested with a ProTime meter and had a 2.9. The day of my concussion, the hospital's lab results were 2.92. It made me pretty confident in the meter's results. When I got my InRatio, I tested on both machines and got the same result. I was pretty confident in the accuracy of the InRatio.

When I had a low INR reported by the InRatio machine, I also checked on my CoaguChek S and got very similar results. With an increase in Warfarin levels, I quickly resolved my low INR within a few days (which was safe, according to published protocols).

On Wednesday, I had a stroke - not too much damage (it's only four days later and I'm able to type this). As soon as I felt the strange symptoms, I tested my blood with the InRatio and took two aspirins. The InRatio reported 2.6. The hospital lab result - a day later - was 1.7. The next day, it was 1.6.

This makes me wonder about the InRatio machine -- how could it give me a 2.6, when a blood draw showed the below range 1.7? If it was underreporting my actual INR, how long was it doing this? Why weren't the quality controls on the strips showing errors? How long was I in the unsafe range, and did this erroneous reporting of my tests contribute to my stroke?

I took my warfarin EVERY night and never missed it (until the night I was in the E.R. for this damned thing). I was consistent with diet and activity. I trusted the meter. HOW COULD THIS HAVE HAPPENED?

I'm going back to the hospital on Monday for a blood draw. Although I don't like the idea, I may get involved with the hospital's coumadin clinic. I'll be talking to Alere - and maybe have an attorney give them a call, too.

It's hard NOT to trust these meters - especially because clinics and labs use, and rely on them. However, from my recent experience, I wonder about their continued accuracy and the quality of their QC tests.

Sorry to hear about your stroke; glad you are doing better. I'm a little confused about why you think your machine was wrong. If I've read your post correctly, you had a 2.6 one day at home, but the hospital's 1.7 wasn't until the next day? Off the top of my head, isn't the margin of error somewhere around 0.5? So the 2.6 could be a 2.1, and a 1.7 some 24 hours later isn't that far off, considering its own margin of error. Then, too, your INR could have been slowly dropping, which could also play a part in the 1.7 a day later. I guess it is possible that your machine was far off, but since you can get slightly different results if you do two tests in a row... I'm not sure I would hire a lawyer, since chances are your machine wasn't out of range.

I am curious, though - I know you were looking for expired strips recently; were your strips in date or expired?
 
I also am sorry to hear of your stroke and hope you are recovering well.

I also thought of your request for newly expired strips when I read your recent posts. Perhaps those strips should not have been used?

I've followed many of your ACT posts over the last year or two and worried a bit about you ordering strips from eBay and warfarin from India.
I'm sure your priority before rushing to a lawyer is to recover well and get good coumadin testing established by one method or another.

Best Wishes.
 
The strips I was using were NOT expired - they were current strips from InRatio. The warfarin that I am using is from a local pharmacy--but it is not unlikely that it's the same stuff that I once got from India. I haven't bought warfarin from India in a few years - and when I did, I carefully checked my INR to be confident that it was effective.

And I didn't say that I was rushing to a lawyer. Also - the InRatio guarantee that came with my new machine specifically absolved the company of any responsibility for error.

My primary concern was about older units perhaps not retaining their accuracy. When I was copying results off my meter, it was showing that the battery was low -- I'm suspicious that the battery may also have been low enough to cause an error. My concern is for others, if it truly is the case that old meters with weak batteries may be inaccurate.
 
When I get a new batch of test strips, and have two with different lot numbers, I'll check the repeatability again by running two tests within 1/2 hour of each other using strips from different lots.

I'll post with my results after these tests, but I would be interested to know if anyone has already done such tests and what their results were.

Did that earlier this year. Test results were the same with a strip from each lot #.
 
Lyn:

I'm surprised to read about a margin of error of .5. My guess is that the labs probably want you to believe that there is minimal error -- certainly within .1 or .2. The strip makers also wouldn't be comfortable with a stated margin of error of .5. This is the reason why I was surprised by a drop of .9 in so short a period of time. I didn't take anything that would cause my INR to drop, and the effect of a missed dose when I was in the E.R. shouldn't have been responsible for that large a difference. I will be checking my two meters against each other - but this won't tell me what happens to the older meter when the battery gets low.

Perhaps oddly enough, when I get results from my meter, I like to think that it is accurate - rather than within a particular margin of error. I rarely make any dosage adjustments, relying on the accuracy of the meter. My reliance on a relatively accurate meter may have contributed to my TIA, and I'll probably continue to rely on the accuracy of my replacement meter. I'll just have to remember that the value is probably not exact, and try to shoot for mid-range.

As far as asking for recently expired strips is concerned -- from past experience, these have worked fine in my meter. I couldn't afford to buy 'fresher' strips, so I looked at the 'expired' strips as a way to get them more affordably. (If anyone out there has some NEW InRatio strips that they'd like to send me, for the cost of postage, I'd gladly accept them).

Sorry to hear about your stroke; glad you are doing better. I'm a little confused about why you think your machine was wrong. If I've read your post correctly, you had a 2.6 one day at home, but the hospital's 1.7 wasn't until the next day? Off the top of my head, isn't the margin of error somewhere around .5? So the 2.6 could be 2.1, and the 1.7 some 24 hours later isn't that far off, considering its own margin of error. Your INR could also have been slowly dropping, which could play a part in the 1.7 a day later. I guess it is possible that your machine was far off, but since you can get slightly different results if you do 2 tests in a row, I'm not sure I would hire a lawyer - chances are your machine wasn't out of range.

I am curious, though - I know you were looking for expired strips recently. Were your strips in date or expired?
 
I haven't bridged since Tuesday night, and I got an InRatio 2 meter to replace my possibly defective InRatio meter. I'll test with both meters and see what they tell me. And, friends, I'll share the results here.

I have an appointment for a blood draw next week, and guess what I'll be doing with my meters.

Great! Looking forward to your results.
pem
 
My personal preference is to always provide a factual document or scholarly article reference if I can, like with the manufacturer's FDA report or the dosing algorithm, but in this case all I can offer is speculation. So, please consider the following in that light.

My opinion is that the home meters likely do use a higher order curve fitting algorithm, probably at least a third order polynomial or better. These will generally lower the residual error below that of just a linear best fit. The meter/test strip manufacturer's goal, as indicated in the FDA report, is to get the residual error down as much as possible after the correction is applied, so that makes the most sense. Without seeing some report that actually contains the raw data it is difficult to know. Furthermore, since the test strip code has 5 characters, each of which can have one of about 35 values (alpha or numeric - not sure if they use both zero and letter O or number 1 and letter I), there is far more information available in the strip code than would be necessary for just a linear correction for each test strip lot.
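The capacity argument can be checked with a bit of arithmetic. A sketch, using the figures from the paragraph above (5 characters, roughly 35 symbols each); the per-coefficient bit counts are illustrative assumptions, not anything from the manufacturer:

```python
import math

# Figures from the post: 5 characters, each from a ~35-symbol alphabet.
symbols = 35
chars = 5

codes = symbols ** chars              # distinct strip codes possible
bits = chars * math.log2(symbols)     # total information capacity

print(f"{codes:,} codes, ~{bits:.1f} bits")

# A per-lot linear correction (slope + intercept at, say, 8 bits each)
# needs only ~16 bits, well under the ~25.6 bits available, so the
# code has room left over for extra calibration coefficients, a
# checksum, or a lot serial number.
linear_bits = 2 * 8
print(f"linear fit needs ~{linear_bits} bits")
```

So even without seeing the raw data, the code clearly carries more information than a linear correction alone would require.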

I think what your predictive model (lab=meter^0.85) is giving you is an additional correction of the meter sensitivity to your Thalassemia condition. This is purely speculation on my part, but past postings on this forum seem to indicate that others believe the meter may be sensitive to that condition, and could give wrong readings in that situation.
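The empirical model and the home-meter target range it implies can be sketched numerically. The 0.85 exponent, the 3.4/2.8 data point, and the 2.5-3.5 lab range are all from this thread; the model itself is, as noted, speculative:

```python
def lab_from_meter(meter_inr, k=0.85):
    """pem's speculative empirical model: lab INR ~= meter INR ** k."""
    return meter_inr ** k

def meter_from_lab(lab_inr, k=0.85):
    """Invert the model to find the home reading for a lab target."""
    return lab_inr ** (1 / k)

# The single INRatio2 data point reported in this thread:
print(round(lab_from_meter(3.4), 1))   # 3.4 at home -> 2.8 at the lab

# Home-meter targets implied by a 2.5-3.5 lab range:
lo, hi = meter_from_lab(2.5), meter_from_lab(3.5)
print(round(lo, 1), round(hi, 1))      # ~2.9 to ~4.4
```

Which reproduces the 2.9-4.4 home-test range discussed earlier in the thread.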
The Professional User Guide for the meter, available at:
https://sdmctrlprod.biosite.com/MC/main/mastercontrol/vault/view_doc.cfm?ls_id=1637AE860C782040C3
does say: "A hematocrit (percentage of blood that is red blood cells) that is higher or lower than the validated operating range of the INRatio®2 system can cause an inaccurate result."

You may have taken enough data for your own case to arrive at an empirical correction factor to account for this with your blood characteristics.
It would take some more research, and possibly direct discussion with the meter manufacturer to confirm that speculation, but it seems reasonable to me that it might be what is going on in your case.

My own meter should be reading within 0.25 of the venipuncture lab test, and I need to do further testing and comparison of my own meter to see why the one data point I do have exceeds that difference. I don't have any such complications that could account for it in my case. I will check my meter for self-consistency, but it may be another month or two before I have the opportunity to go to the lab for a venipuncture INR test, at which time I'll do a meter test at the same time to get another data point. Hopefully the meter will be within the 0.25 allowable error range for me next time, or I may need to talk with the meter manufacturer myself - maybe to get another meter.

As usual, I greatly appreciate your reasoned response.

A few comments:

The five alpha-numeric digits may also be used to contain a checksum and possibly some sort of serialization, but your rationale for the use of a nonlinear calibration function in the meters seems plausible.

Thanks for the link. I'd really like to know what they consider the "validated operating range". Unfortunately, I cannot access the link - it seems to require a username/password. If I knew that range I could easily compare my own hematocrit. Those with Thalassemia Minor trait do tend to have a low/normal or low hematocrit, but not as low as you might expect because of the higher compensatory RBC. I think my hematocrit tends to be at the low end of normal but, again, I don't know if that's in the "validated operating range".

I will be very curious to learn the results of your own side-by-side test.

Thanks again,
pem
 
Off the top of my head, isn't the margin of error somewhere around .5? So the 2.6 could be 2.1, and the 1.7 some 24 hours later isn't that far off, considering its own margin of error. Your INR could also have been slowly dropping, which could play a part in the 1.7 a day later.

This raises an interesting and somewhat related question for me. Does anyone know if the INR scale corresponds linearly with therapeutic effect? I'm not sure if this is the right way to ask the question. What I mean is, if my therapeutic range is 2.5 to 3.5, is there a bigger anticoagulation difference between 2.0 and 2.5 than there is between 3.5 and 4.0? An analogy would be scales of time vs speed. If it takes you 4 seconds to run across the room and then someone else runs across the room in 6 seconds, they were 1 1/2 times as slow, but if they run across the room in 2 seconds they were twice as fast. In this example, differences on the low end of the scale are bigger practical differences than on the high end of the scale. Does that make sense?
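The run-across-the-room analogy can be made concrete with a few lines of arithmetic. The 2-, 4-, and 6-second times are from the example above; the 12-metre room is an arbitrary illustrative distance:

```python
# Equal 2-second gaps in time correspond to unequal ratios in speed
# (speed = distance / time), which is the asymmetry the analogy
# describes: the low end of the time scale matters more.
room = 12.0  # metres; arbitrary illustrative distance
baseline = 4.0  # seconds, the first runner

for t in (2.0, 4.0, 6.0):
    speed = room / t
    print(f"{t:.0f}s -> {speed:.1f} m/s, "
          f"{baseline / t:.2f}x the 4-second runner's speed")
```

Whether INR behaves this way with respect to anticoagulation effect is exactly the open question.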

Thanks,
pem
 
I did a side-by-side test today between my home meter and another point-of-care meter (Coaguchek XL) at the cardiologist's office. The results were InRatio=3.0, Coaguchek=2.9. I also did two different lab draws (yes I was driving around like a crazy person), one at CPL and one at Quest. My model predicts that the CPL result will be 2.5 (just on the edge of therapeutic). The CPL reagent has an ISI of 1.80 and the Quest reagent has an ISI of about 1.0, which should be, in theory, closer to ground truth. I will post both lab results when I get them.
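The role of the ISI can be sketched with the standard WHO relationship, INR = (patient PT / mean normal PT) ^ ISI. The ISI values 1.80 and 1.0 are from the reagents mentioned above; the 12-second mean normal PT is an illustrative assumption:

```python
def inr(pt_seconds, mnpt_seconds, isi):
    """WHO formula: INR = (patient PT / mean normal PT) ** ISI."""
    return (pt_seconds / mnpt_seconds) ** isi

# Illustrative assumption: a mean normal PT of 12 s.
mnpt = 12.0

# For a reported INR of 2.5, back out the raw PT ratio each reagent implies:
for isi in (1.80, 1.0):
    ratio = 2.5 ** (1 / isi)
    print(f"ISI {isi}: PT ratio {ratio:.2f} -> patient PT {ratio * mnpt:.1f} s")

# The ISI-1.0 reagent requires the full 2.5x prolongation, which is
# why an ISI near 1.0 is regarded as closer to the WHO reference
# thromboplastin, i.e. closer to "ground truth".
```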

As it turns out, both lab results were identical: 2.5, which is consistent with the model's predictions.

I don't want to give too much weight to this result, but it compels me in the direction of trusting the lab result as "ground truth". I will likely perform two more lab-to-lab comparisons over the next couple of weeks to develop better statistical confidence.

One conclusion of which I feel fairly confident is that the confounding factors discussed here are not adding much variance to my results. This is based on the observation that in all within-meter, across-meter, and across-meter-brand testing, the results have agreed within 0.1, and the lab results predicted by the meters have been very close as well. So the reliability and consistency of both modes of testing seem fine. It is only the validity (which one is actually the right number) that is still in question for me. But please check my reasoning on this.

Thanks,
pem
 
The phrase '1 and 1/2 times as slow' is one of those confusing statements that has no meaning for me.

My understanding of the way INR is measured is that it's a linear measure of time to clot. A person with an INR of 2.0 takes twice as long (or one time longer) to clot as a person with an INR of 1.0. A person with an INR of 2.5 takes 2.5 times as long (or 1.5 times longer) to clot as a person with an INR of 1.0. The INR was developed to make sense of the varying prothrombin times that labs reported based on differing thromboplastin reagents.

So - in answer to your question, INR, as I understand it, is linear, relating to the standard of 1.0.
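The "linear in clot time" reading holds exactly only when the reagent's ISI is 1.0, since the standard formula raises the PT ratio to the ISI power. A sketch; the 12-second mean normal PT is an illustrative assumption:

```python
def inr_from_pt(pt, mnpt=12.0, isi=1.0):
    """WHO formula: INR = (patient PT / mean normal PT) ** ISI."""
    return (pt / mnpt) ** isi

# With ISI = 1, INR is exactly the PT ratio, so doubling the clot
# time doubles the INR -- the linear relationship described above.
print(inr_from_pt(24.0))          # 2.0
print(inr_from_pt(30.0))          # 2.5

# With a less sensitive reagent (ISI > 1) the same clot time maps
# to a different raw ratio, and the ISI exponent in the formula is
# what normalizes the result back to the reference scale.
print(round(inr_from_pt(24.0, isi=1.8), 2))   # 2.0 ** 1.8 ~= 3.48
```

So the INR scale itself is linear in clot time by construction; whether clot time is linear in *therapeutic effect* is the separate question pem raised.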

I'm going to the lab for a blood draw this afternoon - inconvenient and probably unnecessary - but in the interest of science, I'm doing it. Before I go, I'll test my blood using strips from the same batch, on an InRatio and an InRatio2. Results may be interesting...
 
