<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Parselfinger]]></title><description><![CDATA[The Parselfinger]]></description><link>https://theparselfinger.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:04:45 GMT</lastBuildDate><atom:link href="https://theparselfinger.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How I Built a Serverless Expense Tracker Using AWS and Generative AI]]></title><description><![CDATA[Introduction
I've always struggled with tracking my expenses. This year, I resolved to improve by reviewing my bank statements each month. However, this quickly became overwhelming for two main reasons. First, like most people, I have multiple bank a...]]></description><link>https://theparselfinger.com/how-i-built-a-serverless-expense-tracker-using-aws-and-generative-ai</link><guid isPermaLink="true">https://theparselfinger.com/how-i-built-a-serverless-expense-tracker-using-aws-and-generative-ai</guid><category><![CDATA[AWS]]></category><category><![CDATA[lambda]]></category><category><![CDATA[SES]]></category><category><![CDATA[S3]]></category><category><![CDATA[etl-pipeline]]></category><category><![CDATA[Google Gemini AI]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Samuel Adekoya]]></dc:creator><pubDate>Sun, 12 Oct 2025 21:43:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/zR7nFjjIAWE/upload/316586036e702a7986ee3ed82d979c18.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>I've always struggled with tracking my expenses. This year, I resolved to improve by reviewing my bank statements each month. However, this quickly became overwhelming for two main reasons. First, like most people, I have multiple bank accounts. Second, some of my banks require me to manually request statements, which is not only inconvenient but also easy to forget.</p>
<p>Additionally, my transaction emails are scattered across various inboxes, and when statements finally arrive, they often come in different formats. Before I could start analyzing my spending, I was already spending too much time on data collection.</p>
<p>What I really wanted was a single automated system that could:</p>
<ol>
<li><p>Collect all transaction alerts across my banks in one place.</p>
</li>
<li><p>Extract the actual transaction details (amount, description, date).</p>
</li>
<li><p>Store them in a structured format.</p>
</li>
<li><p>Generate a monthly spending report without requiring any manual work from me.</p>
</li>
</ol>
<p>After researching and failing to find an existing (and free) solution that met all my requirements, I decided to build one myself. The idea I came up with was to create a pipeline that automatically forwards my transaction emails to a central location. From there, a serverless function is triggered to use Generative AI to extract transaction details, store them in a database, and then, every month, another serverless function generates a spending report. I built the system using AWS SES, Amazon S3, Gemini AI, and DynamoDB.</p>
<h2 id="heading-architectural-overview">Architectural Overview</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757518923463/9e506e1a-483c-439f-8830-9c7207544959.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-by-step-implementation">Step-by-Step Implementation</h2>
<h3 id="heading-1-setting-up-a-domain-identity">1. Setting Up a Domain Identity</h3>
<p>The first step was to direct all transactional emails to a central inbox so that my Lambda function could process them efficiently. Amazon SES (Simple Email Service) was perfect for this task because it can receive emails and apply rules to determine how each message should be handled. However, before SES can send or receive emails on your behalf, it must verify your domain identity. Domain verification involves adding specific DNS records that prove ownership of your domain, while enabling DomainKeys Identified Mail (DKIM) adds an extra layer of authentication to ensure that outgoing messages are trusted and not tampered with.</p>
<p>Since I already had a registered domain, I decided to verify it directly. From the AWS console, I navigated to <strong>SES → Verified Identities</strong> and created a new identity using the <strong>domain</strong> option. SES then generated a set of <strong>CNAME records</strong> that needed to be added to my domain’s DNS configuration. I logged into my domain provider and published the records exactly as provided. After a short wait for DNS propagation, SES marked the domain as verified. This verification allowed SES to handle email authentication via DKIM, enabling me to send and receive messages securely from my own domain.</p>
<p>The next step was to set up an <strong>MX record</strong> so Amazon SES could begin receiving incoming emails. The MX record essentially tells other mail servers that AWS is authorized to accept mail for your domain. In my case, within <strong>Namecheap’s DNS settings</strong>, I created a new MX record with the host set to <code>ses</code> and the value pointing to <code>inbound-smtp.&lt;aws-region&gt;.</code><a target="_blank" href="http://amazonaws.com"><code>amazonaws.com</code></a>. Once saved, any email sent to an address under the subdomain <a target="_blank" href="http://ses.mydomain.com"><code>ses.mydomain.com</code></a> would be routed directly to SES, ready to be processed and passed along to my Lambda function later on.</p>
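<p>For reference, the record described above looks roughly like this in a DNS panel (the region and the priority of 10 are assumptions — use the inbound SMTP endpoint for whichever region your SES identity lives in):</p>

```plaintext
Type: MX
Host: ses
Value: 10 inbound-smtp.us-east-1.amazonaws.com
TTL: Automatic
```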
<h3 id="heading-ia">2. Receipt Rules</h3>
<p>With SES now able to receive emails for my domain, the next step was to configure <strong>receipt rules</strong>. A <strong>receipt rule</strong> defines how incoming mail is handled based on specific recipient conditions that you set.</p>
<p>I started by creating a <strong>rule set</strong>, which is simply a collection of individual receipt rules. Within that set, I added a new rule and specified a <strong>recipient condition</strong> — this could be a domain, subdomain, or specific email address. If no recipient condition is defined, SES will process any email sent to the subdomain (e.g., anything@ses.mydomain.com), which works but isn’t ideal for this use case. To narrow it down, I configured the rule to trigger only for emails sent to a specific address, e.g., <strong>transactions@ses.mydomain.com</strong>.</p>
<p>Once the condition was set, I added two <strong>actions</strong> to the rule:</p>
<ol>
<li><p><strong>Store the incoming email in an S3 bucket</strong>.</p>
</li>
<li><p><strong>Invoke a Lambda function</strong> — the Lambda picks up the email from the bucket and processes it to extract transaction details like amount, merchant, and description.</p>
</li>
</ol>
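<p>I configured this rule through the console, but for anyone scripting it, here is a rough sketch of the same rule expressed for boto3's SES <code>create_receipt_rule</code> API. The rule name, bucket name, rule-set name, and function ARN below are placeholders, not my actual values:</p>

```python
# Sketch of the receipt rule as boto3 would express it.
# All names and the ARN are placeholders for illustration.
rule = {
    "Name": "store-and-process-transactions",
    "Enabled": True,
    "Recipients": ["transactions@ses.mydomain.com"],
    "ScanEnabled": True,
    "Actions": [
        # Action 1: store the raw email in S3
        {"S3Action": {"BucketName": "txn-emails-bucket"}},
        # Action 2: invoke the processing Lambda asynchronously
        {"LambdaAction": {
            "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-txn-email",
            "InvocationType": "Event",
        }},
    ],
}

# ses = boto3.client("ses")
# ses.create_receipt_rule(RuleSetName="txn-rule-set", Rule=rule)
```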
<h3 id="heading-3-email-forwarding">3. Email Forwarding</h3>
<p>With the receiving pipeline in place, I needed a way to automatically forward <strong>transaction alert emails</strong> from my bank to Amazon SES. Fortunately, Gmail makes this easy with its built-in <strong>filters and forwarding rules</strong>.</p>
<p>For each bank I receive alerts from, I created a Gmail filter that detects transaction emails based on keywords in the subject or sender address. I then set the filter to <strong>forward those emails</strong> to the same address defined in my SES receipt rule.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760014070234/39c49df3-c32b-441f-b37c-f23e7f63fe79.png" alt class="image--center mx-auto" /></p>
<p>Now, every time a new transaction alert arrives in my inbox, Gmail automatically forwards it to SES, which stores it in S3 and triggers my Lambda function for processing.</p>
<h3 id="heading-4-processing-transactions">4. Processing Transactions</h3>
<p>Once Amazon SES delivers an incoming email to S3, the Lambda function is automatically triggered with an event that contains metadata about the message. The first step inside the function is to fetch the actual email file from S3 so it can be processed. Each record in the event includes a <code>messageId</code>, which corresponds to the S3 object key where SES stored the raw email:</p>
<pre><code class="lang-python"><span class="hljs-keyword">for</span> record <span class="hljs-keyword">in</span> event.get(<span class="hljs-string">"Records"</span>, []):
    ses_message = record.get(<span class="hljs-string">"ses"</span>, {}).get(<span class="hljs-string">"mail"</span>, {})
    message_id = ses_message.get(<span class="hljs-string">"messageId"</span>)

    raw_email = s3_client.get_object(Bucket=TXN_EMAILS_BUCKET_NAME, Key=message_id)[<span class="hljs-string">"Body"</span>].read()
</code></pre>
<p>This snippet retrieves the raw email file (stored in MIME format) as a byte stream. The MIME structure contains everything — headers, sender, subject, and multiple body parts (plain text, HTML, or even attachments).</p>
<p>To make sense of it, the Lambda uses Python’s built-in <code>email</code> library, which can parse these complex message structures into something readable.</p>
<pre><code class="lang-python">msg = BytesParser(policy=policy.SMTP).parsebytes(raw_email)
body = msg.get_body(preferencelist=(<span class="hljs-string">"plain"</span>, <span class="hljs-string">"html"</span>))
</code></pre>
<p>This ensures that no matter how the bank formats the email, the function picks the most human-readable part, either plain text or HTML. The extracted text is then stored in <code>msg_body</code> and becomes the foundation for the next step: AI-powered parsing.</p>
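<p>To see what this parsing step actually does, here is a minimal, self-contained example. The sender, subject, and body are invented for illustration, standing in for a raw MIME message SES would write to S3:</p>

```python
from email import policy
from email.parser import BytesParser

# A made-up raw MIME message, standing in for an SES-stored email.
raw_email = (
    b"From: alerts@bank.example\r\n"
    b"To: transactions@ses.mydomain.com\r\n"
    b"Subject: Debit Alert\r\n"
    b"Content-Type: text/plain; charset=utf-8\r\n"
    b"\r\n"
    b"NGN 12,500.00 debit at Ebeano Supermarket on 2025-10-05.\r\n"
)

# Parse the byte stream into a structured EmailMessage.
msg = BytesParser(policy=policy.SMTP).parsebytes(raw_email)

# Pick the most human-readable part: plain text first, then HTML.
body = msg.get_body(preferencelist=("plain", "html"))
msg_body = body.get_content()

print(msg["Subject"])    # Debit Alert
print(msg_body.strip())  # NGN 12,500.00 debit at Ebeano Supermarket on 2025-10-05.
```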
<p>To interpret and structure the transaction data, the function relies on <strong>Gemini 1.5 Flash</strong>, Google’s generative AI model. Gemini takes the raw message text and converts it into a clean JSON object based on a custom prompt that defines the schema and extraction rules.</p>
<p>That prompt contains a detailed instruction block like this:</p>
<pre><code class="lang-plaintext">Extract transaction details from the following message: {msg}

Schema Definition:
{
    "amount": "number - The monetary value of the transaction (required)",
    "transactionType": "string - Either 'credit' or 'debit' (required)",
    "paymentMethod": "string - Method of payment (e.g., 'cash', 'card', 'bank transfer', 'check') (required)",
    "date": "string - Transaction date in ISO 8601 format (YYYY-MM-DD) (required)",
    "description": "string - Brief description of the transaction (required)",
    "category": "string - Transaction category (e.g., 'food', 'transport', 'utilities') (optional)",
    "merchant": "string - Name of the merchant/recipient (optional)"
}

Instructions:
1. Extract all required fields from the message
2. Convert amounts to numerical values (e.g., "₦20.50" → 20.50)
3. Standardize dates to ISO 8601 format
4. If a required field cannot be determined from the message, use null
5. Use the most appropriate category based on the description
6. Clean and standardize merchant names when available

Examples:
Input: "Spent 25.99 at Walmart on groceries yesterday using my debit card"
Output: {
    "amount": 25.99,
    "transactionType": "debit",
    "paymentMethod": "card",
    "date": "2024-03-19",  // Assuming today is 2024-03-20
    "description": "Grocery purchase at Walmart",
    "category": "food",
    "merchant": "Walmart"
}

Input: "Received $500 bank transfer from John for rent"
Output: {
    "amount": 500.00,
    "transactionType": "credit",
    "paymentMethod": "bank transfer",
    "date": "2024-03-20",  // Assuming today's date
    "description": "Rent payment from John",
    "category": "housing",
    "merchant": "John"
}

Error Handling:
- If the message is unclear or ambiguous, provide best-effort extraction
- For missing required fields, use null instead of empty strings
- For amounts in foreign currency, convert to default currency if possible
- For dates without a year, assume the current or most recent year

Return the extracted details in valid JSON format.
</code></pre>
<p>When the Lambda runs, it loads this prompt file, replaces {msg} with the actual email content, and sends the combined text to Gemini for processing:</p>
<pre><code class="lang-python">prompt = prompt.replace(<span class="hljs-string">"{msg}"</span>, str(msg_body))
response = model.generate_content(prompt)
</code></pre>
<p>Gemini responds with a JSON block that represents the extracted transaction. To safely isolate that data from the rest of the response text, the function looks for JSON wrapped in triple backticks:</p>
<pre><code class="lang-python">json_match = re.search(<span class="hljs-string">r"```json\s*([\s\S]+?)\s*```"</span>, response.text, re.MULTILINE)
</code></pre>
<p>Once a match is found, the JSON is cleaned, parsed, and converted into a Python dictionary.</p>
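<p>Putting those two steps together, here is a self-contained sketch of the extraction, with a canned string standing in for a real <code>response.text</code> from Gemini (the backtick fence is built programmatically only so the example stays readable):</p>

```python
import json
import re

fence = "`" * 3  # a literal triple-backtick fence, as Gemini emits around JSON

# Canned stand-in for response.text from a Gemini reply.
response_text = (
    "Here are the extracted details:\n"
    + fence + "json\n"
    + '{"amount": 12500.0, "transactionType": "debit", "merchant": "Ebeano Supermarket"}\n'
    + fence
)

# Same pattern as in the Lambda: grab the JSON between the backtick fences.
pattern = fence + r"json\s*([\s\S]+?)\s*" + fence
json_match = re.search(pattern, response_text, re.MULTILINE)

# Parse the captured JSON into a Python dictionary.
data = json.loads(json_match.group(1)) if json_match else None

print(data["amount"])  # 12500.0
```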
<p>At the end of this step, the pipeline transforms each raw email message into clean, structured, and queryable data. For example, a transaction alert is parsed into a standardized format like:</p>
<pre><code class="lang-python">{
  <span class="hljs-string">"amount"</span>: <span class="hljs-number">12500.00</span>,
  <span class="hljs-string">"transactionType"</span>: <span class="hljs-string">"debit"</span>,
  <span class="hljs-string">"paymentMethod"</span>: <span class="hljs-string">"card"</span>,
  <span class="hljs-string">"date"</span>: <span class="hljs-string">"2025-10-05"</span>,
  <span class="hljs-string">"description"</span>: <span class="hljs-string">"Purchase at Ebeano Supermarket"</span>,
  <span class="hljs-string">"category"</span>: <span class="hljs-string">"groceries"</span>,
  <span class="hljs-string">"merchant"</span>: <span class="hljs-string">"Ebeano Supermarket"</span>
}
</code></pre>
<p>The final step is to persist this structured data to <strong>DynamoDB</strong>. After parsing, the function connects to DynamoDB via the <code>boto3</code> interface:</p>
<pre><code class="lang-python">dynamodb = boto3.resource(<span class="hljs-string">"dynamodb"</span>)
table = dynamodb.Table(TXN_TABLE_NAME)
table.put_item(Item={<span class="hljs-string">"message_id"</span>: message_id, **data})
</code></pre>
<p>The <code>message_id</code> from SES serves as a unique identifier to prevent duplicate inserts if SES ever retries delivery. This ensures idempotency in the pipeline.</p>
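<p>Worth noting: a plain <code>put_item</code> with the same key silently overwrites the existing item rather than rejecting it. If you'd rather have a retried delivery fail loudly, DynamoDB supports conditional writes. A sketch of how that call could look (the <code>message_id</code> and <code>data</code> values here are placeholders for the ones parsed above):</p>

```python
# Placeholders for the values produced earlier in the pipeline.
message_id = "example-ses-message-id"
data = {"amount": 12500.0, "transactionType": "debit"}

# Conditional write: only insert if no item with this key exists yet.
put_kwargs = {
    "Item": {"message_id": message_id, **data},
    "ConditionExpression": "attribute_not_exists(message_id)",
}
# table.put_item(**put_kwargs)
# A retried SES delivery would now raise ConditionalCheckFailedException
# instead of overwriting the stored transaction.
```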
<h3 id="heading-5-generating-monthly-reports">5. Generating Monthly Reports</h3>
<p>Now that the pipeline is complete, I created another Lambda function that automatically generates a <strong>monthly spending report</strong> in PDF format. This report summarizes the previous month's activity, including total income, total expenses, net income, and top spending categories and merchants. It also visualizes spending patterns with <strong>pie and bar charts</strong>, then uploads the finished PDF to an S3 bucket for long-term storage.</p>
<p><strong>5.1 Overview</strong></p>
<p>This Lambda function is automatically triggered by an <strong>Amazon EventBridge</strong> rule (EventBridge was formerly known as CloudWatch Events). The rule uses a <strong>cron expression</strong> to call the function at the start of each month. For example:</p>
<pre><code class="lang-plaintext">cron(0 6 1 * ? *)
</code></pre>
<p>This means:</p>
<ul>
<li><p>0 6 → run at 06:00 UTC,</p>
</li>
<li><p>1 → on the 1st day of every month</p>
</li>
<li><p>* ? * → every month, any day of the week, any year.</p>
</li>
</ul>
<p><strong>5.2 Fetching and Filtering Transactions</strong></p>
<p>The function starts by connecting to DynamoDB and retrieving all stored transactions:</p>
<pre><code class="lang-python">dynamodb = boto3.resource(<span class="hljs-string">"dynamodb"</span>, region_name=REGION)
table = dynamodb.Table(TXN_TABLE_NAME)
response = table.scan()
items = response[<span class="hljs-string">"Items"</span>]
</code></pre>
<p>Because a single <code>scan</code> call returns at most 1 MB of data, the function keeps scanning with the returned <code>LastEvaluatedKey</code> until all items are collected.</p>
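<p>That pagination loop can be sketched like this (a scan hands back a <code>LastEvaluatedKey</code> whenever there are more items to fetch):</p>

```python
def scan_all(table):
    """Collect every item from a DynamoDB table, following scan pagination."""
    response = table.scan()
    items = list(response["Items"])
    # Keep scanning while DynamoDB reports there are more pages.
    while "LastEvaluatedKey" in response:
        response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
        items.extend(response["Items"])
    return items
```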
<p>Next, it determines the previous month based on the current date:</p>
<pre><code class="lang-python">current_date = datetime.now()
<span class="hljs-keyword">if</span> current_date.month == <span class="hljs-number">1</span>:
    previous_month = <span class="hljs-string">f"<span class="hljs-subst">{current_date.year - <span class="hljs-number">1</span>}</span>-12"</span>
<span class="hljs-keyword">else</span>:
    previous_month = <span class="hljs-string">f"<span class="hljs-subst">{current_date.year}</span>-<span class="hljs-subst">{current_date.month - <span class="hljs-number">1</span>:<span class="hljs-number">02</span>d}</span>"</span>
</code></pre>
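<p>The filtering itself can then be a simple prefix match, assuming dates are stored in the ISO <code>YYYY-MM-DD</code> format the extraction prompt enforces (the items below are placeholders standing in for the scan results):</p>

```python
# Placeholder items, standing in for the result of the table scan.
items = [
    {"date": "2025-09-14", "amount": 2000.0},
    {"date": "2025-10-05", "amount": 12500.0},
]
previous_month = "2025-09"

# Keep only transactions whose ISO date falls in the previous month.
previous_month_transactions = [
    txn for txn in items if str(txn.get("date", "")).startswith(previous_month)
]

print(len(previous_month_transactions))  # 1
```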
<p><strong>5.3 Aggregating Data</strong></p>
<p>Once the transactions are filtered, the Lambda aggregates key financial metrics and spending breakdowns:</p>
<pre><code class="lang-python">total_income = <span class="hljs-number">0.0</span>
total_expenses = <span class="hljs-number">0.0</span>
categories = defaultdict(float)
merchants = defaultdict(float)

<span class="hljs-keyword">for</span> txn <span class="hljs-keyword">in</span> previous_month_transactions:
    amount = parse_amount(txn[<span class="hljs-string">"amount"</span>])
    <span class="hljs-keyword">if</span> txn[<span class="hljs-string">"transactionType"</span>].lower() == <span class="hljs-string">"credit"</span>:
        total_income += amount
    <span class="hljs-keyword">else</span>:
        total_expenses += abs(amount)
    categories[txn[<span class="hljs-string">"category"</span>]] += abs(amount)
    merchants[txn[<span class="hljs-string">"merchant"</span>]] += abs(amount)
</code></pre>
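<p>The <code>parse_amount</code> helper used above isn't shown in the snippet; a plausible version (my reconstruction, not the exact code) just normalizes DynamoDB's <code>Decimal</code> values and currency-formatted strings into plain floats:</p>

```python
import re
from decimal import Decimal

def parse_amount(value):
    """Coerce a stored amount (a DynamoDB Decimal, a number, or a string
    like '₦12,500.00') into a plain float."""
    if isinstance(value, (int, float, Decimal)):
        return float(value)
    # Strip currency symbols and thousands separators, keep digits/sign/dot.
    cleaned = re.sub(r"[^0-9.\-]", "", str(value))
    return float(cleaned) if cleaned else 0.0

print(parse_amount(Decimal("25.50")))  # 25.5
print(parse_amount("₦12,500.00"))      # 12500.0
```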
<p>It then builds a summary dictionary that captures both numerical and categorical insights:</p>
<pre><code class="lang-python">report = {
    <span class="hljs-string">"month"</span>: previous_month,
    <span class="hljs-string">"summary"</span>: {
        <span class="hljs-string">"total_transactions"</span>: len(previous_month_transactions),
        <span class="hljs-string">"total_income"</span>: total_income,
        <span class="hljs-string">"total_expenses"</span>: total_expenses,
        <span class="hljs-string">"net_income"</span>: total_income - total_expenses,
    },
    <span class="hljs-string">"top_categories"</span>: sorted(categories.items(), key=<span class="hljs-keyword">lambda</span> x: x[<span class="hljs-number">1</span>], reverse=<span class="hljs-literal">True</span>)[:<span class="hljs-number">10</span>],
    <span class="hljs-string">"top_merchants"</span>: sorted(merchants.items(), key=<span class="hljs-keyword">lambda</span> x: x[<span class="hljs-number">1</span>], reverse=<span class="hljs-literal">True</span>)[:<span class="hljs-number">10</span>],
    <span class="hljs-string">"transactions"</span>: previous_month_transactions,
}
</code></pre>
<p><strong>5.4 Creating Charts and the PDF Report</strong></p>
<p>The Lambda function uses the ReportLab library to create a well-formatted PDF that includes both data tables and visual charts. It generates two charts: a pie chart that shows spending by category and a bar chart that highlights the top merchants.</p>
<pre><code class="lang-python">categories_dict = dict(report[<span class="hljs-string">"top_categories"</span>][:<span class="hljs-number">8</span>])
pie_chart = create_pie_chart(categories_dict, <span class="hljs-string">"Spending by Category"</span>)

merchants_dict = dict(report[<span class="hljs-string">"top_merchants"</span>][:<span class="hljs-number">8</span>])
bar_chart = create_bar_chart(merchants_dict, <span class="hljs-string">"Spending by Merchant"</span>)
</code></pre>
<p>Finally, the PDF is generated and written to the Lambda’s temporary directory:</p>
<pre><code class="lang-python">output_path = Path(<span class="hljs-string">"/tmp/reports"</span>)
pdf_file = output_path / <span class="hljs-string">f"transaction_report_<span class="hljs-subst">{previous_month}</span>.pdf"</span>
generate_monthly_pdf_report(previous_month, report, report[<span class="hljs-string">"top_categories"</span>], report[<span class="hljs-string">"top_merchants"</span>], pdf_file)
</code></pre>
<p><strong>5.5 Uploading the Report to S3</strong></p>
<p>After generating the report, it’s uploaded to an S3 bucket for long-term storage:</p>
<pre><code class="lang-python">s3_client.upload_file(
    str(pdf_file),
    s3_bucket,
    <span class="hljs-string">f"monthly_reports/<span class="hljs-subst">{previous_month}</span>/transaction_report_<span class="hljs-subst">{previous_month}</span>.pdf"</span>
)
</code></pre>
<p><strong>5.6 Lambda Entry Point</strong></p>
<p>The function entry point ties everything together:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    dynamodb = boto3.resource(<span class="hljs-string">"dynamodb"</span>, region_name=REGION)
    table = dynamodb.Table(TXN_TABLE_NAME)
    reports_s3_bucket = os.environ.get(<span class="hljs-string">"REPORTS_S3_BUCKET"</span>)
    output_dir = <span class="hljs-string">"/tmp/reports"</span>

    generate_monthly_report(
        table,
        output_dir,
        s3_bucket=reports_s3_bucket,
        s3_prefix=<span class="hljs-string">""</span>
    )

    <span class="hljs-keyword">return</span> {<span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>, <span class="hljs-string">"body"</span>: <span class="hljs-string">"Monthly report generated successfully"</span>}
</code></pre>
<p>At the end of this process, the application produces a clean, structured PDF summary of transactions, categorized and formatted for easy review. Below is an example of the generated PDF report (not real data):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760121941190/48d0fd5a-b8e2-4056-92ab-951325142c2c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760121933740/f3efefef-7b50-44fe-a3eb-6f639f573919.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building this project was a fun and fulfilling experience. What began as a simple idea to automate how I track my expenses gradually evolved into a complete, serverless ecosystem powered by AWS and Generative AI. I’ve been using the system for a few months now, and while it’s occasionally unsettling to face the raw numbers, it has also made me more intentional and responsible with my finances.</p>
<p>There's still room for improvement, especially in refining how transactions are categorized. For example, POS transactions often lack detailed descriptions, making them harder to classify accurately. One idea I'm considering is adding a Telegram bot that asks me to provide short descriptions right after these transactions happen. This small human-in-the-loop step would enrich the data and make the insights even more reliable.  </p>
<p>If you’d like to explore the code or try building something similar yourself, the Lambda implementation is <a target="_blank" href="https://github.com/parselfinger/pennywise">available here</a>.</p>
]]></content:encoded></item></channel></rss>