Excel files and agents

Hello,

I am trying to solve a problem where I want to create an agent that talks to my Excel files.

The Excel files can be very messy and may not have a fixed format. I was thinking of creating a RAG system, where I load my Excel file into a vector database and then retrieve chunks to query it. Is this a good approach, specifically if I am going to ask the agent for aggregations and some calculations?

hi @sjayawant

I think RAG over embedded Excel chunks is the wrong approach:

  • retrieval returns top-k items, so you end up computing on a partial dataset
  • chunking turns the table into text and can lose rows/columns
  • an LLM isn’t a deterministic calculation engine

what you could do:

  1. Extract/understand structure from messy .xlsx (LangChain has UnstructuredExcelLoader (I’ve seen you already did it :slight_smile: ), including an “elements” mode that yields per‑sheet table elements and HTML in metadata)
  2. Normalize into tables (dataframes) and load into a real execution layer (SQLite/DuckDB/Postgres or pandas/Polars…)
  3. Use an agent to generate queries, but execute them with tools (e.g., LangChain Community's SQL agent path via create_sql_agent(…, agent_type="tool-calling")), and return the query + results for audit
  4. Keep embeddings only for finding the right sheet/table/columns, then do the actual math in SQL/pandas etc.
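Steps 2 and 3 above can be sketched end to end. This is a minimal sketch with a made-up sheet and column names; a real messy workbook would be read with `pd.read_excel("report.xlsx", sheet_name=None)` after cleanup:

```python
import sqlite3

import pandas as pd

# In practice: sheets = pd.read_excel("report.xlsx", sheet_name=None)
# Simulated with an in-memory sheet so the sketch is self-contained
sheets = {
    "orders": pd.DataFrame(
        {"Cust Name": ["cust1", "cust2"], "Zip Code": ["zip1", "zip1"]}
    )
}

conn = sqlite3.connect(":memory:")
for name, df in sheets.items():
    # Normalize headers so the SQL agent sees a clean schema
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    df.to_sql(name, conn, if_exists="replace", index=False)

# The agent generates the SQL; you execute it deterministically and
# return query + result for audit
rows = conn.execute(
    "SELECT zip_code, COUNT(*) FROM orders GROUP BY zip_code HAVING COUNT(*) > 1"
).fetchall()
print(rows)  # [('zip1', 2)]
```

The aggregation happens in SQLite, not in the LLM, so the numbers are exact no matter how large the sheet is.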

And see this guide: Build a SQL agent - Docs by LangChain

Hi Pawel, thanks again for course-correcting me. I am going to follow the steps you mentioned and will keep you posted :slight_smile:

1 Like

Good luck :slight_smile: fingers crossed :crossed_fingers:

Just a follow-up question. I am able to load the data using the unstructured parser and even add the required data to a dataframe and then a SQLite table, but here is the problem: with every input Excel file the schema will change, and I will end up creating separate tables for each input. So I created a single table excel_data with just two columns, id and row_data, where id is an auto-generated column and row_data is a text column with JSON data. For example:

id, row_data

1, location: loc1, custname: cust1, zip_code: zip1

2, location: loc2, custname: cust2, zip_code: zip2

But my SQL agent is unable to query or parse the JSON data. It gives me the following message:

It seems necessary to parse the `row_data` JSON to identify fields that could correspond to ZIP codes and check for customers with the same ZIP code. Unfortunately, analyzing and matching ZIP codes directly within the current raw `row_data` column’s format is not feasible without explicitly identifying their presence.

hi @sjayawant thanks for your answer, I’ll follow up today :slight_smile:
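One quick note in the meantime: SQLite can query a JSON column with its built-in json_extract, but only if row_data holds valid JSON; the rows you pasted are bare key: value text, which no JSON parser will accept. A minimal sketch with properly serialized rows (values are your example data):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE excel_data (id INTEGER PRIMARY KEY, row_data TEXT)")
records = [
    {"location": "loc1", "custname": "cust1", "zip_code": "zip1"},
    {"location": "loc2", "custname": "cust2", "zip_code": "zip1"},
]
# json.dumps gives you valid, quoted JSON instead of bare key: value text
conn.executemany(
    "INSERT INTO excel_data (row_data) VALUES (?)",
    [(json.dumps(r),) for r in records],
)

# json_extract lets plain SQL reach into the JSON blob
dupes = conn.execute(
    "SELECT json_extract(row_data, '$.zip_code') AS zip, COUNT(*) AS n "
    "FROM excel_data GROUP BY zip HAVING n > 1"
).fetchall()
print(dupes)  # [('zip1', 2)]
```

With valid JSON in the column, the "customers with the same ZIP code" question your agent refused becomes a one-line GROUP BY.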

Hey Pawel ,

I was able to query the JSON data, but I feel the agent is not reliable. In my approach I created a table as follows:

id, filename, tabname, schema, row_data

Since there may be multiple files and each file may have multiple sheets, I want the agent to first identify the correct file and correct sheet, and then it can get the right schema. I am currently working on improving the system prompt so that it goes through these steps. Let me know your thoughts on it.

Thanks

Sandeep

Hey @sjayawant ,

You’re thinking in the right direction — but the reliability concern you’re feeling is valid.

What’s happening right now is:

  • You’ve flattened everything into one generic table (id, filename, tabname, schema, row_data)

  • The agent now has to:

    1. Figure out the correct file

    2. Figure out the correct sheet

    3. Understand schema from JSON

    4. Parse JSON

    5. Do reasoning

That’s a lot of cognitive load for an LLM — and that’s exactly where reliability drops.


The Core Issue

Even if your system prompt forces:

“First identify file → then sheet → then schema → then query”

LLMs are not deterministic planners. Prompting helps, but it doesn’t guarantee correct step execution every time.

You’re essentially asking the agent to behave like a query planner + JSON parser + reasoning engine.

That’s fragile.


What I’d Suggest Instead

Rather than improving the system prompt further, improve the architecture:

:one: Don’t store rows as raw JSON if you want SQL reliability

If possible, normalize into structured columns dynamically per sheet.

Even if schemas differ per file:

  • Create one SQL table per sheet

  • Let the SQL agent inspect table schemas directly

LLMs are much better at:

“Here are 6 tables, pick the right one.”

Than:

“Parse JSON blob and infer hidden schema.”
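With one table per sheet, the agent can discover real schemas through SQL itself instead of inferring them. A sketch (sheet and column names are illustrative):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
# One table per sheet (names illustrative)
pd.DataFrame({"custname": ["cust1"], "zip_code": ["zip1"]}).to_sql(
    "customers", conn, index=False
)
pd.DataFrame({"sku": ["a1"], "qty": [3]}).to_sql("inventory", conn, index=False)

# The agent lists real tables and columns instead of parsing a hidden JSON schema
tables = [r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
schemas = {t: [col[1] for col in conn.execute(f"PRAGMA table_info({t})")] for t in tables}
print(schemas)  # {'customers': ['custname', 'zip_code'], 'inventory': ['sku', 'qty']}
```

This is exactly the "here are 6 tables, pick the right one" situation LLMs handle well.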


:two: Separate discovery from execution

Make it two-step:

  • Step 1: Identify file + sheet (metadata reasoning)

  • Step 2: Query structured table with SQL tool

You can even make “select file/sheet” a deterministic tool instead of pure reasoning.
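A rough sketch of such a deterministic selection step (the catalog and function names here are made up; in LangChain you could expose this as a tool via the @tool decorator):

```python
# Catalog of (file, sheet) -> columns, built once at ingest time (names made up)
CATALOG = {
    ("sales.xlsx", "orders"): ["custname", "zip_code", "amount"],
    ("sales.xlsx", "returns"): ["custname", "reason"],
}

def select_sheet(keywords: list[str]) -> tuple[str, str]:
    """Deterministically pick the (file, sheet) whose columns best match the keywords."""
    def score(cols: list[str]) -> int:
        # Count keywords that appear inside any column name
        return sum(1 for k in keywords if any(k in c for c in cols))
    return max(CATALOG, key=lambda key: score(CATALOG[key]))

print(select_sheet(["zip"]))  # ('sales.xlsx', 'orders')
```

The LLM only has to supply keywords; the selection itself is repeatable code, not free-form reasoning.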


:three: Keep embeddings only for routing

If you have many files:

  • Use embeddings to retrieve relevant file/sheet metadata

  • Then pass only that table to the SQL agent

Reduce the search space.
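As a sketch of that routing step (word overlap stands in for real embedding similarity here; in practice you would embed the metadata and the question with your embedding model and rank by cosine similarity — descriptions are invented):

```python
# Per-sheet metadata descriptions; in production these would be embedded once
# and stored in the vector store
SHEETS = {
    "orders": "customer names, zip code, order amount",
    "inventory": "sku, quantity on hand, warehouse",
}

def route(question: str) -> str:
    # Word overlap as a cheap stand-in for vector similarity
    q = set(question.lower().replace("?", "").split())
    return max(SHEETS, key=lambda s: len(q & set(SHEETS[s].replace(",", "").split())))

print(route("which customers share a zip code"))  # orders
```

Only the winning table (and its schema) is then handed to the SQL agent, so its prompt stays small.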


About Your Current Prompt Strategy

Improving the system prompt to enforce ordered reasoning will help somewhat, but it won’t make it truly reliable.

When agents feel unreliable, it’s usually a sign that:

The problem should be pushed more into deterministic tools and less into prompt reasoning.


Honestly, you’re not wrong.
Your current design works — but it’s cognitively heavy for the model.

If you normalize schema per sheet and let SQL operate on real columns instead of JSON blobs, reliability will jump significantly.

You’re very close — just shift more responsibility from the LLM to the execution layer :+1:

1 Like

@Bitcot_Kaushal, thanks for the detailed review of the problem. I have removed the sheet and file identification from the system prompt, so the LLM no longer has to identify the sheet and file. But the approach of creating a separate table for each sheet worries me: if this agent goes into production, imagine 10 users using it. Every run will create a huge number of tables, and after every run I will have to drop them or they will keep piling up. Instead, is it possible to have two agents, where the first queries only the schema column and returns the schema to the second, which then uses those keys to query the row_data column? Would that approach work?

Thanks

Sandeep

@sjayawant maybe you can try using skills. Here is an example skill:


name: xlsx
description: “Use this skill any time a spreadsheet file is the primary input or output. This means any task where the user wants to: open, read, edit, or fix an existing .xlsx, .xlsm, .csv, or .tsv file (e.g., adding columns, computing formulas, formatting, charting, cleaning messy data); create a new spreadsheet from scratch or from other data sources; or convert between tabular file formats. Trigger especially when the user references a spreadsheet file by name or path — even casually (like “the xlsx in my downloads”) — and wants something done to it or produced from it. Also trigger for cleaning or restructuring messy tabular data files (malformed rows, misplaced headers, junk data) into proper spreadsheets. The deliverable must be a spreadsheet file. Do NOT trigger when the primary deliverable is a Word document, HTML report, standalone Python script, database pipeline, or Google Sheets API integration, even if tabular data is involved.”
license: Proprietary. LICENSE.txt has complete terms

Requirements for Outputs

All Excel files

Professional Font

  • Use a consistent, professional font (e.g., Arial, Times New Roman) for all deliverables unless otherwise instructed by the user

Zero Formula Errors

  • Every Excel model MUST be delivered with ZERO formula errors (#REF!, #DIV/0!, #VALUE!, #N/A, #NAME?)

Preserve Existing Templates (when updating templates)

  • Study and EXACTLY match existing format, style, and conventions when modifying files

  • Never impose standardized formatting on files with established patterns

  • Existing template conventions ALWAYS override these guidelines

Financial models

Color Coding Standards

Unless otherwise stated by the user or existing template

Industry-Standard Color Conventions

  • Blue text (RGB: 0,0,255): Hardcoded inputs, and numbers users will change for scenarios

  • Black text (RGB: 0,0,0): ALL formulas and calculations

  • Green text (RGB: 0,128,0): Links pulling from other worksheets within same workbook

  • Red text (RGB: 255,0,0): External links to other files

  • Yellow background (RGB: 255,255,0): Key assumptions needing attention or cells that need to be updated

Number Formatting Standards

Required Format Rules

  • Years: Format as text strings (e.g., “2024” not “2,024”)

  • Currency: Use $#,##0 format; ALWAYS specify units in headers (“Revenue ($mm)”)

  • Zeros: Use number formatting to make all zeros “-”, including percentages (e.g., “$#,##0;($#,##0);-”)

  • Percentages: Default to 0.0% format (one decimal)

  • Multiples: Format as 0.0x for valuation multiples (EV/EBITDA, P/E)

  • Negative numbers: Use parentheses (123) not minus -123
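If the file is built with openpyxl, the rules above map directly onto number_format strings. A short sketch (cell positions and values are illustrative):

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

ws["A1"] = "Revenue ($mm)"                    # units go in the header
ws["A2"] = 1234567
ws["A2"].number_format = "$#,##0;($#,##0);-"  # negatives in parens, zeros as "-"
ws["A3"] = 0.152
ws["A3"].number_format = "0.0%"               # one-decimal percentage
ws["A4"] = 8.4
ws["A4"].number_format = "0.0x"               # valuation multiple
ws["A5"] = "2024"                             # year as text, never rendered as 2,024
wb.save("formats.xlsx")
```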

Formula Construction Rules

Assumptions Placement

  • Place ALL assumptions (growth rates, margins, multiples, etc.) in separate assumption cells

  • Use cell references instead of hardcoded values in formulas

  • Example: Use =B5*(1+$B$6) instead of =B5*1.05

Formula Error Prevention

  • Verify all cell references are correct

  • Check for off-by-one errors in ranges

  • Ensure consistent formulas across all projection periods

  • Test with edge cases (zero values, negative numbers)

  • Verify no unintended circular references

Documentation Requirements for Hardcodes

  • Add a cell comment, or a note in the cells beside the table (if at the end of the table). Format: “Source: [System/Document], [Date], [Specific Reference], [URL if applicable]”

  • Examples:

    • “Source: Company 10-K, FY2024, Page 45, Revenue Note, [SEC EDGAR URL]”

    • “Source: Company 10-Q, Q2 2025, Exhibit 99.1, [SEC EDGAR URL]”

    • “Source: Bloomberg Terminal, 8/15/2025, AAPL US Equity”

    • “Source: FactSet, 8/20/2025, Consensus Estimates Screen”

XLSX creation, editing, and analysis

Overview

A user may ask you to create, edit, or analyze the contents of an .xlsx file. You have different tools and workflows available for different tasks.

Important Requirements

LibreOffice Required for Formula Recalculation: You can assume LibreOffice is installed for recalculating formula values using the scripts/recalc.py script. The script automatically configures LibreOffice on first run, including in sandboxed environments where Unix sockets are restricted (handled by scripts/office/soffice.py)

Reading and analyzing data

Data analysis with pandas

For data analysis, visualization, and basic operations, use pandas which provides powerful data manipulation capabilities:

import pandas as pd

# Read Excel
df = pd.read_excel('file.xlsx')  # Default: first sheet
all_sheets = pd.read_excel('file.xlsx', sheet_name=None)  # All sheets as dict

# Analyze
df.head()      # Preview data
df.info()      # Column info
df.describe()  # Statistics

# Write Excel
df.to_excel('output.xlsx', index=False)

Excel File Workflows

CRITICAL: Use Formulas, Not Hardcoded Values

Always use Excel formulas instead of calculating values in Python and hardcoding them. This ensures the spreadsheet remains dynamic and updateable.

:cross_mark: WRONG - Hardcoding Calculated Values

# Bad: Calculating in Python and hardcoding result
total = df['Sales'].sum()
sheet['B10'] = total  # Hardcodes 5000

# Bad: Computing growth rate in Python
growth = (df.iloc[-1]['Revenue'] - df.iloc[0]['Revenue']) / df.iloc[0]['Revenue']
sheet['C5'] = growth  # Hardcodes 0.15

# Bad: Python calculation for average
avg = sum(values) / len(values)
sheet['D20'] = avg  # Hardcodes 42.5

:white_check_mark: CORRECT - Using Excel Formulas

# Good: Let Excel calculate the sum
sheet['B10'] = '=SUM(B2:B9)'

# Good: Growth rate as Excel formula
sheet['C5'] = '=(C4-C2)/C2'

# Good: Average using Excel function
sheet['D20'] = '=AVERAGE(D2:D19)'

This applies to ALL calculations - totals, percentages, ratios, differences, etc. The spreadsheet should be able to recalculate when source data changes.

Common Workflow

  1. Choose tool: pandas for data, openpyxl for formulas/formatting

  2. Create/Load: Create new workbook or load existing file

  3. Modify: Add/edit data, formulas, and formatting

  4. Save: Write to file

  5. Recalculate formulas (MANDATORY IF USING FORMULAS): Use the scripts/recalc.py script

    python scripts/recalc.py output.xlsx
    
    
  6. Verify and fix any errors:

    • The script returns JSON with error details

    • If status is errors_found, check error_summary for specific error types and locations

    • Fix the identified errors and recalculate again

    • Common errors to fix:

      • #REF!: Invalid cell references

      • #DIV/0!: Division by zero

      • #VALUE!: Wrong data type in formula

      • #NAME?: Unrecognized formula name

Creating new Excel files

# Using openpyxl for formulas and formatting
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill, Alignment

wb = Workbook()
sheet = wb.active

# Add data
sheet['A1'] = 'Hello'
sheet['B1'] = 'World'
sheet.append(['Row', 'of', 'data'])

# Add formula
sheet['B2'] = '=SUM(A1:A10)'

# Formatting
sheet['A1'].font = Font(bold=True, color='FF0000')
sheet['A1'].fill = PatternFill('solid', start_color='FFFF00')
sheet['A1'].alignment = Alignment(horizontal='center')

# Column width
sheet.column_dimensions['A'].width = 20

wb.save('output.xlsx')

Editing existing Excel files

# Using openpyxl to preserve formulas and formatting
from openpyxl import load_workbook

# Load existing file
wb = load_workbook('existing.xlsx')
sheet = wb.active  # or wb['SheetName'] for specific sheet

# Working with multiple sheets
for sheet_name in wb.sheetnames:
    sheet = wb[sheet_name]
    print(f"Sheet: {sheet_name}")

# Modify cells
sheet['A1'] = 'New Value'
sheet.insert_rows(2)  # Insert row at position 2
sheet.delete_cols(3)  # Delete column 3

# Add new sheet
new_sheet = wb.create_sheet('NewSheet')
new_sheet['A1'] = 'Data'

wb.save('modified.xlsx')

Recalculating formulas

Excel files created or modified by openpyxl contain formulas as strings but not calculated values. Use the provided scripts/recalc.py script to recalculate formulas:

python scripts/recalc.py <excel_file> [timeout_seconds]

Example:

python scripts/recalc.py output.xlsx 30

The script:

  • Automatically sets up LibreOffice macro on first run

  • Recalculates all formulas in all sheets

  • Scans ALL cells for Excel errors (#REF!, #DIV/0!, etc.)

  • Returns JSON with detailed error locations and counts

  • Works on both Linux and macOS

Formula Verification Checklist

Quick checks to ensure formulas work correctly:

Essential Verification

  • [ ] Test 2-3 sample references: Verify they pull correct values before building full model

  • [ ] Column mapping: Confirm Excel columns match (e.g., column 64 = BL, not BK)

  • [ ] Row offset: Remember Excel rows are 1-indexed (DataFrame row 5 = Excel row 6)
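Both index checks above can be verified in code with openpyxl's column-letter helper:

```python
from openpyxl.utils import get_column_letter

assert get_column_letter(64) == "BL"     # column 64 is BL, not BK
df_row = 5                               # 0-indexed DataFrame position
excel_row = df_row + 1                   # Excel rows are 1-indexed
print(get_column_letter(64), excel_row)  # BL 6
```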

Common Pitfalls

  • [ ] NaN handling: Check for null values with pd.notna()

  • [ ] Far-right columns: FY data often in columns 50+

  • [ ] Multiple matches: Search all occurrences, not just first

  • [ ] Division by zero: Check denominators before using / in formulas (#DIV/0!)

  • [ ] Wrong references: Verify all cell references point to intended cells (#REF!)

  • [ ] Cross-sheet references: Use correct format (Sheet1!A1) for linking sheets

Formula Testing Strategy

  • [ ] Start small: Test formulas on 2-3 cells before applying broadly

  • [ ] Verify dependencies: Check all cells referenced in formulas exist

  • [ ] Test edge cases: Include zero, negative, and very large values

Interpreting scripts/recalc.py Output

The script returns JSON with error details:

{
  "status": "success",           // or "errors_found"
  "total_errors": 0,              // Total error count
  "total_formulas": 42,           // Number of formulas in file
  "error_summary": {              // Only present if errors found
    "#REF!": {
      "count": 2,
      "locations": ["Sheet1!B5", "Sheet1!C10"]
    }
  }
}
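A small sketch of acting on that report programmatically. It assumes the script emits exactly the JSON shown above; here the report is inlined as a string rather than captured from the script's stdout:

```python
import json

# The sample report from above, inlined for the sketch
report = json.loads("""
{"status": "errors_found", "total_errors": 2, "total_formulas": 42,
 "error_summary": {"#REF!": {"count": 2, "locations": ["Sheet1!B5", "Sheet1!C10"]}}}
""")

if report["status"] == "errors_found":
    for err, info in report["error_summary"].items():
        print(f"fix {info['count']} {err} at {', '.join(info['locations'])}")
# fix 2 #REF! at Sheet1!B5, Sheet1!C10
```

This makes the fix-and-recalculate loop in step 6 of the workflow mechanical: recalculate, parse, fix the listed cells, repeat until status is success.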

Best Practices

Library Selection

  • pandas: Best for data analysis, bulk operations, and simple data export

  • openpyxl: Best for complex formatting, formulas, and Excel-specific features

Working with openpyxl

  • Cell indices are 1-based (row=1, column=1 refers to cell A1)

  • Use data_only=True to read calculated values: load_workbook('file.xlsx', data_only=True)

  • Warning: If opened with data_only=True and saved, formulas are replaced with values and permanently lost

  • For large files: Use read_only=True for reading or write_only=True for writing

  • Formulas are preserved but not evaluated - use scripts/recalc.py to update values

Working with pandas

  • Specify data types to avoid inference issues: pd.read_excel('file.xlsx', dtype={'id': str})

  • For large files, read specific columns: pd.read_excel('file.xlsx', usecols=['A', 'C', 'E'])

  • Handle dates properly: pd.read_excel('file.xlsx', parse_dates=['date_column'])

Code Style Guidelines

IMPORTANT: When generating Python code for Excel operations:

  • Write minimal, concise Python code without unnecessary comments

  • Avoid verbose variable names and redundant operations

  • Avoid unnecessary print statements

For Excel files themselves:

  • Add comments to cells with complex formulas or important assumptions

  • Document data sources for hardcoded values

  • Include notes for key calculations and model sections

hi @sjayawant

do the above responses solve your problem or do you still need some guidance?