6 min read

Myriad Cuts Document Processing Time by 80% Using AWS GenAI

AI

ThinkTools Team

AI Research Lead

Introduction

Healthcare organizations routinely grapple with the sheer volume of paper‑based and scanned documents that must be processed to support clinical decision‑making, billing, and regulatory compliance. For a company like Myriad Genetics, which specializes in genomic testing, the prior‑authorization workflow is a critical bottleneck: clinicians submit detailed reports, insurance carriers require structured information, and any delay can postpone patient care or trigger claim denials. Traditional rule‑based or commercial OCR solutions offered limited accuracy and required extensive manual review, driving up operational costs and slowing throughput.

In 2024, Myriad Genetics partnered with the AWS Generative AI Innovation Center to re‑engineer its document‑processing pipeline. By embracing Amazon Bedrock’s foundation models and the newly released open‑source GenAI Intelligent Document Processing Accelerator, the team was able to transform a legacy batch system into a real‑time, highly accurate, and cost‑effective solution. The result was a dramatic 80% reduction in processing time, a 77% cut in associated costs, and an impressive 98% classification accuracy that exceeded the company’s internal benchmarks. This post walks through the technical journey, the optimization strategies that drove these gains, and the measurable business impact on Myriad’s prior‑authorization workflows.

Main Content

The Challenge of Healthcare Document Processing

The primary hurdle for Myriad Genetics was the heterogeneity of the documents it received: handwritten notes, scanned PDFs, and digitally generated PDFs with varying layouts and terminologies. Each document type required a different extraction strategy, and the volume—tens of thousands of files per month—made manual triage untenable. Moreover, the regulatory environment demanded that extracted data be accurate and auditable, adding another layer of complexity.

Leveraging AWS Generative AI Foundations

To address these challenges, the team turned to Amazon Bedrock, which provides access to a suite of foundation models from providers such as Anthropic (Claude), Meta (Llama), and Amazon (Titan). Careful prompt engineering allowed the developers to craft prompts that instruct the model to identify key sections (e.g., patient demographics, test results, and insurance information) and to output structured JSON. However, Bedrock alone was not sufficient; the raw inference cost and latency were too high for a production environment that required near‑real‑time responses.
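The core of this pattern is a prompt that pins down the output schema, plus a tolerant parser for the model's reply. The sketch below illustrates the idea with hypothetical field names (not Myriad's actual schema) and a simulated model reply; in production the prompt would be sent to Bedrock through boto3's bedrock-runtime client.

```python
import json

# Hypothetical extraction prompt; the field names are illustrative only.
EXTRACTION_PROMPT = """Extract the following fields from the document below
and return them as a single JSON object with exactly these keys:
patient_name, date_of_birth, test_ordered, insurance_carrier, member_id.
Use null for any field not present.

Document:
{document_text}"""

def build_prompt(document_text: str) -> str:
    """Fill the template with OCR'd document text."""
    return EXTRACTION_PROMPT.format(document_text=document_text)

def parse_model_output(raw: str) -> dict:
    """Parse the model's reply, tolerating surrounding prose around the JSON."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

# Simulated reply standing in for a real Bedrock invocation.
simulated_reply = 'Here is the result:\n{"patient_name": "Jane Doe", "member_id": "A123"}'
fields = parse_model_output(simulated_reply)
print(fields["patient_name"])  # Jane Doe
```

Constraining the model to "exactly these keys" and parsing defensively keeps downstream stages simple even when the model wraps its answer in prose.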

Building the Accelerator Pipeline

Enter the GenAI Intelligent Document Processing Accelerator, an open‑source framework that sits atop Bedrock and streamlines the entire pipeline. The accelerator provides a modular architecture where each stage—pre‑processing, inference, post‑processing, and validation—can be configured independently. Myriad’s engineers leveraged the accelerator’s pre‑processing module to perform OCR with Amazon Textract, then fed the extracted text into Bedrock’s foundation models via the accelerator’s inference wrapper. The accelerator’s caching mechanism reduced redundant calls to Bedrock, dramatically cutting inference costs.
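The caching idea is straightforward: key each inference on a hash of the prompt and the extracted text, so identical documents never hit Bedrock twice. A minimal in-memory sketch is below; the accelerator's real implementation differs, and a production cache would be backed by a durable store such as DynamoDB or S3 rather than a dict.

```python
import hashlib

class InferenceCache:
    """Memoize model calls keyed by a hash of (prompt, document text)."""
    def __init__(self, infer_fn):
        self._infer = infer_fn   # the underlying (expensive) model call
        self._store = {}
        self.misses = 0

    def __call__(self, prompt: str, text: str) -> str:
        key = hashlib.sha256((prompt + "\x00" + text).encode()).hexdigest()
        if key not in self._store:
            self.misses += 1                      # only new content is billed
            self._store[key] = self._infer(prompt, text)
        return self._store[key]

# Stand-in for a Bedrock invocation (boto3 bedrock-runtime in production).
def fake_bedrock(prompt, text):
    return f"processed:{len(text)}"

cached = InferenceCache(fake_bedrock)
cached("extract", "same document")
cached("extract", "same document")   # second call is served from cache
print(cached.misses)                 # 1
```

Because OCR output for a resubmitted document is byte-identical, content hashing catches the duplicates that drive up inference spend.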

The post‑processing stage used a lightweight rule engine to map the model’s JSON output to Myriad’s internal data schema. Validation rules flagged low‑confidence fields for human review, ensuring that the final dataset met compliance standards. By packaging each pipeline stage as an AWS Lambda function and orchestrating them with Step Functions, the team achieved a serverless architecture that scaled automatically with document volume, eliminating the need for over‑provisioned compute resources.
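A rule engine of this kind can be remarkably small: a declarative field map plus a confidence threshold. The sketch below uses invented field names and an assumed 0.85 threshold purely for illustration.

```python
# Illustrative mapping from model output keys to an internal schema.
FIELD_MAP = {
    "patient_name": "member.full_name",
    "member_id": "insurance.member_id",
}
REVIEW_THRESHOLD = 0.85  # assumed cutoff; tune against labeled review data

def post_process(model_output: dict) -> tuple:
    """model_output maps field -> {"value": ..., "confidence": float}.
    Returns (mapped record, fields flagged for human review)."""
    record, needs_review = {}, []
    for src, dest in FIELD_MAP.items():
        entry = model_output.get(src)
        if entry is None:
            needs_review.append(src)        # missing field -> human review
            continue
        record[dest] = entry["value"]
        if entry["confidence"] < REVIEW_THRESHOLD:
            needs_review.append(src)        # low confidence -> human review
    return record, needs_review

record, flagged = post_process({
    "patient_name": {"value": "Jane Doe", "confidence": 0.97},
    "member_id": {"value": "A123", "confidence": 0.62},
})
print(flagged)  # ['member_id']
```

Keeping the mapping declarative means schema changes are configuration edits, not code changes, which matters in an auditable compliance setting.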

Optimizing Classification and Extraction

Achieving 98% classification accuracy required a multi‑layered optimization strategy. First, the team curated a domain‑specific dataset of labeled documents and used it to fine‑tune a Bedrock model via the accelerator’s fine‑tuning hooks. This step reduced the model’s reliance on generic prompts and improved its ability to recognize medical terminology and insurance codes.

Next, the accelerator’s prompt‑templating engine was employed to provide context‑aware prompts that varied based on document type. For example, a prompt for a handwritten note would include instructions to focus on free‑form text, whereas a prompt for a structured PDF would instruct the model to parse tabular data. The accelerator’s built‑in confidence scoring allowed the pipeline to automatically route uncertain results to a human review queue, ensuring that the overall accuracy remained high without sacrificing throughput.
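Context-aware templating reduces to selecting a template by document type and filling in the fields of interest. The template text below is illustrative; the accelerator's actual templating engine is configuration-driven rather than hard-coded.

```python
# Hypothetical per-document-type prompt templates.
TEMPLATES = {
    "handwritten_note": (
        "The document is a handwritten clinical note. Transcribe free-form "
        "text carefully and extract: {fields}.\n\n{document}"
    ),
    "structured_pdf": (
        "The document is a structured form. Parse tables row by row and "
        "extract: {fields}.\n\n{document}"
    ),
}

def render_prompt(doc_type: str, fields: list, document: str) -> str:
    """Pick the template for this document type and fill it in."""
    template = TEMPLATES.get(doc_type)
    if template is None:
        # Unknown types fail loudly rather than falling back to a generic prompt.
        raise KeyError(f"no prompt template for document type {doc_type!r}")
    return template.format(fields=", ".join(fields), document=document)

prompt = render_prompt("structured_pdf", ["member_id", "test_ordered"], "...")
print(prompt.splitlines()[0])
```

Tailoring the instructions to the layout the classifier detected is what lets one pipeline handle handwriting and tabular PDFs with the same downstream schema.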

Finally, the team implemented a cost‑optimization loop: by monitoring the accelerator’s usage metrics, they identified periods of low activity and adjusted the Lambda concurrency limits accordingly. This dynamic scaling prevented over‑provisioning during off‑peak hours and contributed to the 77% cost reduction.
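One simple way to derive a concurrency limit from usage metrics is Little's law: concurrent executions ≈ arrival rate × average service time. The constants below are assumptions for illustration; in a real deployment the invocation rate would come from CloudWatch and the computed limit would be applied via the Lambda API's reserved-concurrency setting.

```python
# Assumed operating parameters for this sketch.
MIN_CONCURRENCY = 2       # small warm floor during off-peak hours
MAX_CONCURRENCY = 100     # ceiling reserved for this function
AVG_DURATION_SEC = 12.0   # observed average per-document processing time

def target_concurrency(invocations_per_minute: float) -> int:
    """Little's law: concurrency ~= arrival rate * service time, clamped."""
    needed = (invocations_per_minute / 60.0) * AVG_DURATION_SEC
    return max(MIN_CONCURRENCY, min(MAX_CONCURRENCY, round(needed)))

print(target_concurrency(5))     # off-peak: floor applies -> 2
print(target_concurrency(1200))  # peak: ceiling applies -> 100
```

Clamping between a warm floor and a hard ceiling is what turns raw metrics into the dynamic-scaling behavior described above: capacity follows demand without paying for idle headroom overnight.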

Business Impact and ROI

The transformation had a tangible impact on Myriad Genetics’ prior‑authorization workflow. With processing times slashed by 80%, clinicians could submit requests and receive approvals in a matter of hours instead of days. The reduction in manual review effort freed up 30% of the compliance team’s time, allowing them to focus on higher‑value tasks such as audit preparation and stakeholder communication.

From a financial perspective, the 77% cost reduction translated into savings of approximately $1.2 million annually, based on the company’s previous spend on OCR, manual review, and legacy infrastructure. The improved accuracy also reduced claim denials by 12%, directly boosting revenue and enhancing patient satisfaction.

Conclusion

Myriad Genetics’ partnership with AWS’s Generative AI Innovation Center demonstrates how a well‑architected, open‑source accelerator can unlock significant operational efficiencies in a highly regulated industry. By combining Bedrock’s powerful foundation models with the GenAI Intelligent Document Processing Accelerator’s modular pipeline, the company achieved unprecedented speed, accuracy, and cost savings. The case study underscores the importance of domain‑specific fine‑tuning, dynamic scaling, and rigorous validation in deploying generative AI at scale.

The success story also highlights a broader trend: as generative AI models mature, the bottleneck shifts from model performance to integration and operationalization. Organizations that invest in flexible, open‑source frameworks will be better positioned to adapt to new data sources, regulatory changes, and evolving business needs.

Call to Action

If your organization is wrestling with document‑heavy workflows—whether in healthcare, finance, or legal—consider exploring AWS’s Generative AI ecosystem. Start by evaluating your data pipeline for bottlenecks, then experiment with Bedrock’s foundation models to prototype extraction tasks. Finally, leverage the GenAI Intelligent Document Processing Accelerator to build a scalable, cost‑effective solution that can be deployed across your enterprise. Reach out to the AWS Generative AI Innovation Center today to learn how you can accelerate your own transformation journey and unlock measurable business value.
