Potential Updates to Measurements and Rating System

I'm still trying to figure out ways to improve the rating system that I use in my reporting.  It is getting better and closer to what I'd like to use permanently, but there are still a few changes I am considering (I'll mention some thoughts below).  

For the month of September, I thought that Verizon having the lowest rating did not properly account for the good aspects of their network (good ping, download, and consistency).  I also thought that T-Mobile having the highest score may not account for their weak core/backhaul network.  I admit that T-Mobile seems a lot better with their backhaul now, but there are still times when I see time-outs that don't show up in my automated testing.  I am looking for more metrics to better automate real-usage statistics like web browsing and video streaming.  This is the reason I am adding NPERF measurements.

When I did the first report in August, I thought that T-Mobile was achieving too high of a score because even though their RAN network is great, the core was bad.  I felt that maybe too much weight was given to the quality of signal (RSRQ) and the signal strength (RSRP).  This is when I tried to bring in some additive components (good points that would negate some of the negative measurements).  

The problem with this method was that T-Mobile sometimes achieved a score over 100% because their RAN was so good and they also had great download speeds.

Here is how I do ratings now.

Everyone starts at 100% based on the total number of plots I have for that month.  One point is deducted for each of the bad point criteria, with the exception of download speeds below 1 Mbps and any occurrence of packet loss, which are each deducted at 5 points.  Since these conditions are drastic, I wanted a way to give them a heavier weight.

On the flip side, the good points add 1 point for download speeds greater than 50 Mbps.  Any time ping is less than 30 ms, a point is added as well (I may eventually make this a multiplier).  The one good point that is a multiplier (x5) is download speeds greater than 150 Mbps.  

I subtract the bad points from and add the good points to the total number of plots, make that the numerator (top number), and divide by the total number of plots to come up with a percentage.

Bad Points

Qual/RSRQ <= -20
RSRP <= -130
Packet Loss > 0 (multiply number x 5)
Download Speeds < 5000 (5 Mbps)
Download Speeds < 1000 (multiply number x 5)
PINGMAX > 200ms
Non-4G/5G Connection (maybe multiplier eventually)

Good Points

Download Speeds > 50000 (50 Mbps)
PINGMIN <= 30ms (may eventually make this a multiplier)
Download Speeds > 150000 (150 Mbps) (multiply x 5)
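To make the math concrete, here is a minimal sketch of the scoring as described above.  The field names (rsrq, download, ping_min, etc.) are my own placeholders, not actual log column names, and download speeds are taken in kbps to match the thresholds (5000 = 5 Mbps).

```python
def rate(plots):
    """Score a month of plots as a percentage, per the good/bad point lists."""
    total = len(plots)
    points = total  # everyone starts at 100% (one point per plot)
    for p in plots:
        # bad points
        if p["rsrq"] <= -20:
            points -= 1
        if p["rsrp"] <= -130:
            points -= 1
        if p["packet_loss"] > 0:
            points -= 5   # drastic condition: x5 weight
        if p["download"] < 5000:
            points -= 1
        if p["download"] < 1000:
            points -= 5   # drastic condition: x5 weight
        if p["ping_max"] > 200:
            points -= 1
        if p["tech"] not in ("4G", "5G"):
            points -= 1
        # good points
        if p["download"] > 50000:
            points += 1
        if p["ping_min"] <= 30:
            points += 1
        if p["download"] > 150000:
            points += 5   # x5 multiplier
    return 100.0 * points / total
```

Note that nothing caps the result, which is why a carrier with a strong RAN and fast downloads can land above 100%.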

Here is a recap of how the ratings came out for September.

Overall Winner: T-Mobile (98.94%)
Second Place: AT&T (98.58%)
Third Place: Verizon (95.90%)

Here are some thoughts.  I may write up a different post showing how these results would change with a few modifications to the rating system.

  1. Instead of having my G-NetTrack Pro app log every 1 second, I could log every 5 seconds or so.  This would help with bad spots not being counted as heavily.  The downside is that I may miss out on some of the other info I gather from that data.
  2. I could add up the total negative RSRQ and RSRP points and divide that number in half.
  3. I could make ping less than 30 ms a multiplier.  This would help Verizon's score.  I could also make any time the phone reverts to a 2G or 3G signal a multiplier.
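Changes 2 and 3 could be sketched like this.  This is only my reading of those two tweaks: the x5 factor on the new multipliers is an assumption borrowed from the existing x5 multipliers, and the field names (rsrq, ping_min, tech, etc.) are hypothetical placeholders.

```python
def rate_modified(plots):
    """Score with change 2 (halved signal deductions) and change 3 (new multipliers)."""
    total = len(plots)
    points = float(total)  # start at one point per plot
    for p in plots:
        # change 2: total the RSRQ/RSRP deductions, then cut them in half
        signal_bad = 0
        if p["rsrq"] <= -20:
            signal_bad += 1
        if p["rsrp"] <= -130:
            signal_bad += 1
        points -= signal_bad / 2
        # remaining bad points, unchanged
        if p["packet_loss"] > 0:
            points -= 5
        if p["download"] < 5000:
            points -= 1
        if p["download"] < 1000:
            points -= 5
        if p["ping_max"] > 200:
            points -= 1
        if p["tech"] not in ("4G", "5G"):
            points -= 5   # change 3: 2G/3G reverts become a multiplier (factor assumed)
        # good points
        if p["download"] > 50000:
            points += 1
        if p["ping_min"] <= 30:
            points += 5   # change 3: ping < 30 ms becomes a multiplier (factor assumed)
        if p["download"] > 150000:
            points += 5
    return 100.0 * points / total
```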

I'm not trying to change the order but just want to make sure an equal rating is applied to the good and bad for each carrier.

Here is what I think I would do (numbers 2 and 3).

Here are the new results.

T-Mobile: 99.7%
AT&T: 99.42%
Verizon: 97.37%

When I add the NPERF measurements next month, I will refine further.

What do you think?