Neptune Rum is the ‘World’s Most Awarded Rum 2018’


To be able to define what is meant by the ‘World’s Most Awarded Rum’ it was necessary to determine and justify:

  1. How competitions would be included/excluded from the results
  2. The method by which the competition results would be obtained and deemed as accurate
  3. How the awards would be collated and ultimately analysed
  4. How the information gathered could be standardised/homogenised to be accurately counted and compared
  5. Why specific and defined award categories would/would not be included


To be able to achieve 1-5 above, the following research was conducted and subsequent data analysed. This detailed analysis can be provided upon request, subject to the authorisation of the Neptune Beverage Company Board.

1. Data collection

The competition results used within this study have been taken directly from the official competition award websites.

The justification for inclusion & exclusion of competitions was as follows:

  • The competition must be ‘inclusive’ and as such open to all SKUs across all Brand Portfolios and therefore, by definition, must be International, Global, and not Regional
  • The competitions were only included if they occurred annually, not monthly. For this reason any awards given within monthly reviews have been excluded
  • A definitive list of all Global competitions providing awards for ‘Rum’ was not found to be available. As such, online searches were conducted to find annual international spirit industry competitions occurring during 2018
  • Despite extensive online searches, the list of competitions included within this study could not be entirely exhaustive. To ensure the inclusion of as many 2018 awards as possible, any additional competitions published on the websites of the top 5 Brand Portfolios and top 10 scoring SKUs (as determined by this study) as having given them awards, have also been included
  • Competitions were excluded on the basis of non-publication and embargo of 2018 results (as at March 2019)
  • A total of 26 competitions were discovered and used within this study, on the basis of this rationale
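
The inclusion rules above can be sketched as a simple filter. The competition records and field names below are illustrative assumptions, not the study’s actual data:

```python
# Hypothetical competition records; names and fields are invented
# purely to illustrate the three exclusion rules described above.
competitions = [
    {"name": "Competition A", "scope": "global", "frequency": "annual", "published_2018": True},
    {"name": "Competition B", "scope": "global", "frequency": "monthly", "published_2018": True},
    {"name": "Competition C", "scope": "regional", "frequency": "annual", "published_2018": True},
    {"name": "Competition D", "scope": "global", "frequency": "annual", "published_2018": False},
]

def include(comp):
    """Apply the three rules: global (not regional) scope, annual
    (not monthly) cadence, and 2018 results published as at March 2019."""
    return (comp["scope"] == "global"
            and comp["frequency"] == "annual"
            and comp["published_2018"])

included = [c["name"] for c in competitions if include(c)]
print(included)  # only Competition A passes all three rules
```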


The justification for inclusion & exclusion of awards within the competitions was as follows:

  • It was determined that an award would only be included if given for an individual product/item, such as the value, quality or packaging of a SKU item. For this reason, awards for Social Media, Consumer Engagement, and Social & Environmental responsibility across all Brand Portfolios were excluded
  • The award, as with the competition, must also be ‘inclusive’ and therefore, by definition, International or Global rather than Regional, for the same reason given above: without excluding Regional awards, a fair means of genuine comparison across all SKUs could not be achieved


2. How the Awards were counted

Some competitions give Medal Awards (Gold, Silver etc.), some give generic category awards, and others give only points. The following was carried out to implement a single point system across all awards:

  • Each competition gives awards in 1 of 3 ways: a tiered medal system, a named category, or a 0-100 point score. To collate and ultimately compare the aggregated results, it was necessary to assign a numeric point value to each individual type of award. This allowed a single point system to be applied, comparatively, across each SKU and ‘Brand Portfolio’, providing a countable value that could be totalled across all 3 award types
  • Research showed that, although competitions give awards in these 3 ways, several had published online that their medalled or category awards were themselves given on the basis of an underlying 0-100 point score. This provided further justification for converting medalled/category awards to points for the purpose of this study. As an additional check, the point values assigned to medalled/category awards within this study were compared against the competitions where only points were given: the hierarchy of assigned points (Gold above Silver, etc.) was numerically aligned and in-line with those competitions that have published their award-to-point equation
  • Many of the competitions within this study have not published the basis (point or otherwise) for their determination of the awarded levels. Research found 7 competitions where a point value was published for 10 of the 25 award categories found within this study (i.e. Platinum, Double Gold, Gold etc.). By taking the minimum & maximum published point value in each such category, a mean-average could reasonably be assumed for further extrapolation. These average point values were therefore used to apply an absolute point value to all awards within that category, throughout this study
  • A (published) comparable point value for the remaining 15 (of 25 total) categories could not be found. In order to determine a reasonable point value, the following assumption has been used:
    • The hierarchical ordering/weighting of those 15 categories was reviewed against the award categories with published point values. In the absence of absolute numbers, each such category was assigned a point score directly in-between the award ranked above it and the award ranked below it, using the medal/trophy point values already assigned within this study (e.g. a category sitting between Platinum and Gold received the midpoint of the Platinum and Gold values)
    • Without comparable published values, such as those available for the other 10 categories, no indisputable point value could be applied to these awards. In this absence, the rationale applied has been assumed adequate and appropriate
  • Rationalising awarded categories – the original number of named awards was 36. These were aggregated and reduced to 25 to simplify the overall results, combining categories whose naming conventions were deemed similar enough to share one description
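
The conversion scheme described above can be sketched with the two rules side by side: the mean of published minimum/maximum point bands where a band was published, and the midpoint of the neighbouring awards where it was not. All point values and the ‘Master Gold’ category name below are invented for demonstration, not the study’s actual conversion table:

```python
# (a) Categories with published min/max point bands: use the mean of
# the band as that category's absolute point value. Bands are invented.
published_bands = {
    "Platinum":    (95, 100),
    "Double Gold": (92, 97),
    "Gold":        (85, 94),
}
category_points = {cat: (lo + hi) / 2 for cat, (lo, hi) in published_bands.items()}

# (b) A category with no published band is placed at the midpoint of
# the award ranked directly above it and the award ranked directly below.
def interpolate(above, below):
    return (category_points[above] + category_points[below]) / 2

# Hypothetical unpublished category sitting between Platinum and Gold.
category_points["Master Gold"] = interpolate("Platinum", "Gold")

print(category_points)  # Platinum 97.5, Double Gold 94.5, Gold 89.5, Master Gold 93.5
```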


3. How the award results were analysed

  • The results/awards from each competition were listed, with each award, for each SKU, appearing on a separate row; there were 1047 awards in total. Several individual SKUs re-appeared frequently in the original listings with slight variations, e.g. ‘Anos’ used instead of ‘Years’, or ‘YO’ instead of ‘Years Old’. Each variation of a SKU was modified & homogenised within this list so that each SKU appears only once, ensuring all awarded points could be applied & accumulated correctly to ‘1’ specific SKU. The same modification was applied to the variable listings of the named ‘Brand Portfolios’, which also appeared under slightly different iterations, to ensure points were accumulated accurately. Further columns held the points awarded to each SKU, and the named awards given to each SKU, within each competition
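
The homogenisation step can be sketched as a word-level substitution pass. The substitution pairs mirror the examples given above (‘Anos’ → ‘Years’, ‘YO’ → ‘Years Old’); the SKU names are illustrative:

```python
# Listing variants to collapse, taken from the examples in the text.
SUBSTITUTIONS = [
    ("Anos", "Years"),    # Spanish-language listing variant
    ("YO", "Years Old"),  # abbreviated listing variant
]

def homogenise(sku: str) -> str:
    """Collapse listing variants so each SKU appears under one name."""
    out = []
    for word in sku.split():
        for old, new in SUBSTITUTIONS:
            if word == old:  # whole-word match only, to avoid mangling names
                word = new
                break
        out.append(word)
    return " ".join(out)

print(homogenise("Example Rum 8 Anos"))  # -> "Example Rum 8 Years"
print(homogenise("Example Rum 8 YO"))    # -> "Example Rum 8 Years Old"
```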


4. Validation and cross-checking of data

  • Cross referencing and validation formulas were applied at various points during the study, to ensure SKUs appeared only once, and to ensure all SKUs were contained in the aggregate point list, and all points were attributed to the correct Brand Portfolio
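
A minimal sketch of the cross-checks described above, assuming an illustrative data shape (one row per award, plus an aggregate point list); the SKU and portfolio names are invented:

```python
# Hypothetical award rows (one per award) and an aggregate point list.
awards = [
    {"sku": "Rum A", "portfolio": "Brand X", "points": 90},
    {"sku": "Rum A", "portfolio": "Brand X", "points": 85},
    {"sku": "Rum B", "portfolio": "Brand Y", "points": 97},
]
aggregate = [("Rum A", 175), ("Rum B", 97)]

# Check 1: each SKU appears only once in the aggregate list.
names = [sku for sku, _ in aggregate]
assert len(names) == len(set(names))

# Check 2: every awarded SKU is contained in the aggregate list.
assert {row["sku"] for row in awards} <= set(names)

# Check 3: per-portfolio totals reconcile with the aggregate total.
portfolio_totals = {}
for row in awards:
    portfolio_totals[row["portfolio"]] = portfolio_totals.get(row["portfolio"], 0) + row["points"]
assert sum(portfolio_totals.values()) == sum(points for _, points in aggregate)

print("all checks passed")
```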


5. Collating, converting & summarising totalled points across each SKU

a)  Each named award was collated to show a total number of a specific category award (Platinum, Gold, etc.) for each SKU
I.  (this was then also counted against awards attributed to a specific Brand Portfolio)
b)  Each award of ‘points’ (as opposed to ‘named awards’) was then applied to each SKU
I.  (again, counted within the awards attributed to a specific Brand Portfolio)
c)  The named awards were then converted into their points value and totalled, per SKU
d)  This converted named value score, per SKU, was then combined with the numeric awards where points-only were given. This combined value provided the total number of points awarded to each SKU across all awards in all 26 competitions in 2018
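
Steps a) to d) can be sketched as follows; the category point values, SKU names and award lists are illustrative, not the study’s actual data:

```python
# Illustrative conversion table (named award -> point value).
category_points = {"Gold": 89.5, "Silver": 80.0}

# Hypothetical award lists: named awards and points-only awards per SKU.
named_awards = [("Rum A", "Gold"), ("Rum A", "Silver"), ("Rum B", "Gold")]
point_awards = [("Rum A", 92), ("Rum B", 88)]

totals = {}
# c) convert each named award to its point value and accumulate per SKU
for sku, category in named_awards:
    totals[sku] = totals.get(sku, 0) + category_points[category]
# d) add the points-only awards to the same running total per SKU
for sku, points in point_awards:
    totals[sku] = totals.get(sku, 0) + points

print(totals)  # {'Rum A': 261.5, 'Rum B': 177.5}
```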

6. Research Results


Highest individual point score for 1 SKU across all 26 competitions = NEPTUNE RUM

TOP 10 SKUs by point score

SKU                                     Total SKU point score
Neptune Barbados Gold                   2562
BACARDI Reserva Ocho 8                  1340
BACARDI Reserva Limitada                1067
BACARDI Gold / Carta Oro                1038
Santa Teresa 1796 Solera                 790
BACARDI Anejo Cuatro                     764
BACARDI Superior / Carta / Blanca        725
Rathlee’s Rum Golden Barrel Aged Rum     634
BACARDI Gran Reserva Diez                583
Parce 8 Year Old Rum                     561

Date: March 2019
Research carried out by: Nicky Cartwright