Acts Bible

"Always ready to give a defense... with gentleness and respect" — 1 Peter 3:15

Translation Accuracy Scorecard

How we measure every translation against the manuscripts. Five dimensions, two anchors, full transparency.

Contents
  1. The Big Idea
  2. The Five Dimensions
  3. The Math Behind the Grades
  4. Word Alignment Algorithm
  5. Why Two Anchors?
  6. Calibration and Validation
  7. What the Scorecard Does Not Measure
  8. Glossary

The Big Idea

When translators turn Hebrew or Greek into English, they make thousands of small choices. Some choices stay very close to the original words. Others rearrange things to sound more natural in English. The Acts Bible measures, for every verse and every translation, how close each English version stays to the source text.

We grade each translation from A+ through F based on five things: how many of the original words are represented, whether the English meanings fall within the attested range of the concordance, whether the word order is preserved, whether words are added or omitted, and whether important theological terms stay consistent across the whole Bible.

This is not about which translation is "best." A paraphrase like The Message is doing a different job than a word-for-word translation like the NASB. The grade tells you what each translation is — literal, dynamic, or paraphrase — not whether it is good for your purpose.

The Five Dimensions

Every translation is scored on five independent dimensions. Each is a number from 0 to 100. The final letter grade is a weighted average — with semantic fidelity carrying the most weight because it is the hardest single thing to fake.

1. Word Coverage (25%)

How many of the original Hebrew or Greek words are represented by a corresponding English word in the translation? We exclude particles — Hebrew and Greek grammar markers that have no direct English equivalent.

2. Semantic Fidelity (30%)

Of the words that were aligned, how many fall within the attested semantic range of the source word? This is the most important dimension: a translation can cover every source word and still shift its meaning, so fidelity is the hardest score to inflate.

3. Word Order Preservation (15%)

How closely does the translation preserve the order of the original text? We compute the longest increasing subsequence of aligned word positions.
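As a sketch of this metric (illustrative helper, not the shipped code): take the source-word indices in the order their aligned English words appear, find the longest increasing subsequence, and divide by the number of alignments.

```python
import bisect

def order_score(positions):
    """Word-order preservation: LIS length / number of aligned words, as 0-100.

    `positions` lists source-word indices in the order their aligned
    English words appear in the translation.
    """
    if not positions:
        return 0.0
    tails = []  # tails[k] = smallest possible tail of an increasing run of length k+1
    for p in positions:
        i = bisect.bisect_left(tails, p)
        if i == len(tails):
            tails.append(p)  # extends the longest run found so far
        else:
            tails[i] = p     # found a smaller tail for a run of length i+1
    return 100.0 * len(tails) / len(positions)
```

A fully preserved order like `[0, 1, 2, 3]` scores 100; a fully reversed order like `[3, 2, 1, 0]` has an LIS of length 1 and scores 25.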

4. Amplification and Reduction (15%)

Does the translation add or omit large amounts of content? English needs roughly 1.3 to 1.5 times as many words as Hebrew or Greek. This dimension penalizes both ends of the scale.
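One plausible penalty shape for this dimension (the exact production curve is an assumption here): full credit inside the healthy 1.3-1.5x band, with a linear falloff toward zero at extreme compression or expansion. The `floor` and `ceiling` cutoffs are illustrative.

```python
def amplification_score(source_words, target_words,
                        low=1.3, high=1.5, floor=0.5, ceiling=3.0):
    """Score the target/source word ratio, 0-100.

    Full credit inside [low, high]; linear penalty outside, reaching
    0 at `floor` (severe omission) and `ceiling` (severe expansion).
    Hypothetical curve shown for illustration.
    """
    ratio = target_words / max(source_words, 1)
    if low <= ratio <= high:
        return 100.0
    if ratio < low:   # translation omits content
        return max(0.0, 100.0 * (ratio - floor) / (low - floor))
    # translation adds content
    return max(0.0, 100.0 * (ceiling - ratio) / (ceiling - high))
```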

5. Key Term Consistency (15%)

For theological key terms like chesed, agape, and dikaiosune, we check that the translation renders them consistently throughout the Bible.
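A simple way to quantify this (the actual metric may differ; this is a sketch): for each key term, take the share of occurrences that use the term's dominant rendering, then average across terms.

```python
from collections import Counter

def key_term_consistency(renderings):
    """Average dominant-rendering share across key terms, 0-100.

    `renderings` maps a key term (e.g. 'chesed') to the list of English
    words a translation uses for it across the whole Bible.
    Illustrative metric, not the shipped implementation.
    """
    if not renderings:
        return 0.0
    scores = []
    for term, words in renderings.items():
        dominant_count = Counter(words).most_common(1)[0][1]
        scores.append(100.0 * dominant_count / len(words))
    return sum(scores) / len(scores)
```

A translation rendering chesed as "mercy" three times out of four would score 75 on that term.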

The Math Behind the Grades

literal_score = 0.25 x coverage
              + 0.30 x fidelity
              + 0.15 x order
              + 0.15 x amplification
              + 0.15 x key_term_consistency

Letter grades use standard academic bands:

Grade   Score    Meaning
A+      95-100   Nearly perfect word-for-word fidelity
A       90-94    Excellent literal translation
A-      85-89    Strong literal with minor smoothing
B+      80-84    Mostly literal, some rearrangement
B       75-79    Balanced literal / dynamic
B-      70-74    Leans dynamic equivalence
C+      65-69    Dynamic equivalence
C       60-64    Liberal paraphrase
D       50-59    Heavy paraphrase
F       <50      Little resemblance to source at word level
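The weighted average and the banding above can be written down directly:

```python
def literal_score(coverage, fidelity, order, amplification, key_terms):
    """Weighted average per the formula above; each input is 0-100."""
    return (0.25 * coverage
            + 0.30 * fidelity
            + 0.15 * order
            + 0.15 * amplification
            + 0.15 * key_terms)

# Lower cutoff of each band, mirroring the grade table.
GRADE_BANDS = [(95, "A+"), (90, "A"), (85, "A-"), (80, "B+"),
               (75, "B"), (70, "B-"), (65, "C+"), (60, "C"),
               (50, "D")]

def letter_grade(score):
    for cutoff, grade in GRADE_BANDS:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a translation scoring 92 / 90 / 85 / 88 / 90 on the five dimensions averages 89.45, which lands in the A- band.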

Word Alignment Algorithm

To score a translation we first have to know which English word corresponds to which original word. The matching algorithm is greedy left-to-right with a sliding window of 5 to 8 positions, checking each target token against a candidate pool with these match priorities:

  1. Exact match (100): lowercase token appears in pool
  2. Stem match (90): token with suffixes stripped appears in pool
  3. Multi-word prefix match (80): token is a prefix of a pool word
  4. Fuzzy match (70 or 56): SequenceMatcher ratio above threshold
  5. No match: original word marked as missing

The entire alignment runs in pure Python against already-loaded lexicon data and completes in under 50 milliseconds per verse per translation.
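The priority ladder for a single token can be sketched as follows. The suffix list and the two fuzzy thresholds are illustrative assumptions; only the priority scores (100 / 90 / 80 / 70 or 56) come from the list above.

```python
from difflib import SequenceMatcher

def match_score(token, pool):
    """Score one English token against the candidate gloss pool.

    Mirrors the priority ladder: exact (100), stem (90), prefix (80),
    fuzzy (70 or 56), no match (0). Simplified sketch.
    """
    token = token.lower()
    if token in pool:
        return 100                              # 1. exact match
    stem = token
    for suffix in ("ing", "ed", "s"):           # assumed suffix list
        if token.endswith(suffix):
            stem = token[: -len(suffix)]
            break
    if stem in pool:
        return 90                               # 2. stem match
    if any(word.startswith(token) for word in pool):
        return 80                               # 3. multi-word prefix match
    best = max((SequenceMatcher(None, token, w).ratio() for w in pool),
               default=0.0)
    if best >= 0.85:                            # assumed thresholds
        return 70                               # 4. strong fuzzy match
    if best >= 0.70:
        return 56                               #    weak fuzzy match
    return 0                                    # 5. no match: word missing
```

In the real pipeline this runs inside the 5-to-8-position sliding window, greedily consuming the best-scoring candidate for each source word left to right.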

Why Two Anchors?

Under the KJV anchor, we grade against the traditional English standard. Under the Acts Bible anchor, we grade against a translation built from manuscript evidence the 1611 committee did not have: the Dead Sea Scrolls, the Nestle-Aland 28th edition, and documentary papyri from Egypt.

The delta between the two scores is itself diagnostic. A translation scoring high under Acts Bible but low under KJV is tracking modern scholarship and diverging from tradition. A translation scoring high under both is threading the needle between old and new.

Calibration and Validation

We validated the scoring against ten hand-picked verses, each assigned an expected grade band in advance, and checked that the computed grades behaved as expected.

What the Scorecard Does Not Measure

A paraphrase that scores low is not a bad translation. It is a different kind of translation, and the scorecard correctly reports that. Use it to understand what each translation is, not to pick a winner.

Glossary

Alignment
Mapping each English word to the original Hebrew or Greek word it represents.
Amplification
The ratio of target words to source words. Healthy range: 1.3-1.5x.
Anchor
The reference point for grading. KJV anchor = traditional standard. Acts Bible anchor = manuscript-based.
Coverage
Percentage of original content words with a corresponding English word.
Dynamic Equivalence
Translation philosophy prioritizing natural English over word-for-word fidelity.
Formal Equivalence
Translation philosophy prioritizing word-for-word fidelity.
Key Term
A theologically significant word tracked for consistency (e.g., chesed, agape).
Acts Bible
The AI-assisted translation with 9-layer provenance.
Provenance
The 9-layer decision chain documenting every translation choice.
Strong's Concordance
The 1890 concordance assigning unique numbers to every Hebrew and Greek word in the KJV.
