
James Allman / JA Technology Solutions LLC

ETL and data pipelines that move your business forward

Every time data moves between systems — whether it is a one-time migration or a process that runs every night — there is an ETL pipeline behind it. I build pipelines that work reliably, and replacements for the ones that do not.

Two Kinds of ETL

One-time ETL handles data migration: extracting data from a legacy system, transforming it to fit the new platform, and loading it with validation and reconciliation. This is project work with a defined end, but it requires deep understanding of both the source and destination systems.

Permanent ETL handles ongoing integration: data that moves between systems on a schedule or in real time. Inventory updates, pricing synchronization, POS item file audits, financial reporting feeds, wholesale distributor exchanges. These pipelines run continuously and must handle errors, retries, and data quality issues gracefully.
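The retry behavior a permanent pipeline needs can be sketched in a few lines. This is an illustrative helper, not part of any particular product: the `with_retries` name, attempt count, and backoff schedule are all assumptions chosen for the example.

```python
import time

def with_retries(step, payload, attempts=3, base_delay=1.0):
    """Run one pipeline step, retrying transient failures.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts
    (exponential backoff), then re-raises so monitoring sees the failure.
    """
    for attempt in range(1, attempts + 1):
        try:
            return step(payload)
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In production, the bare `except Exception` would usually be narrowed to the transient error types of the source or target system, so that data-quality errors fail fast instead of being retried.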

What ETL Work May Include

  • Data extraction from IBM i, SQL Server, PostgreSQL, Informix, Access, and other sources
  • Data transformation including format conversion, validation, enrichment, and cleansing
  • Loading into target systems with reconciliation and error handling
  • Cross-platform database connectivity via JDBC, ODBC, and native drivers
  • File format transformation (fixed-width, delimited, XML, JSON, EDI)
  • Scheduled and event-driven pipeline execution
  • Monitoring, alerting, and recovery for production pipelines
  • Documentation of data flows for audit and compliance
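Fixed-width parsing, one of the format conversions listed above, is simple to sketch. The layout tuples and sample record here are made up for illustration; a real layout would come from the source system's record documentation.

```python
def parse_fixed_width(line, layout):
    """Slice a fixed-width record into named fields and strip padding.

    `layout` is a list of (name, start, length) tuples describing
    the record positions -- an illustrative convention, not a standard.
    """
    return {name: line[start:start + length].strip()
            for name, start, length in layout}

# Hypothetical 34-byte item record: item number, description, price
layout = [("item_no", 0, 6), ("desc", 6, 20), ("price", 26, 8)]
record = "000123WIDGET, BLUE        00012.50"
row = parse_fixed_width(record, layout)
```

The same layout table can drive validation (expected record length, numeric fields) before the data moves downstream.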

Why ETL Fails

ETL failures are rarely about the technology. They happen when the people building the pipeline do not understand the business meaning of the data, do not account for edge cases in the source system, or do not build in validation that catches problems before they propagate.

I bring decades of experience working with the kinds of systems that ETL connects — merchandising platforms, warehouse systems, financial applications, ERP systems. Understanding both sides of the data exchange is what makes the pipeline reliable.

Related Capabilities

ETL work connects closely with migration, system integration, database development, and EDI processing.

Mainframe & Legacy Data Extraction

ETL from mainframe and IBM i systems requires deep understanding of EBCDIC encoding, packed decimal fields, zoned decimal, fixed-width record layouts, and COBOL copybook definitions. I build extraction pipelines that handle these format conversions accurately — preserving data integrity through every transformation step from legacy source to modern target.
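Two of those conversions can be sketched in Python: decoding a packed-decimal (COMP-3) field nibble by nibble, and decoding EBCDIC text with the standard library's cp037 codec. The `unpack_comp3` helper and the sample byte strings are illustrative assumptions, not code from any specific pipeline.

```python
import codecs

def unpack_comp3(raw: bytes, scale: int = 0):
    """Decode an IBM packed-decimal (COMP-3) field.

    Each byte holds two BCD digits; the low nibble of the last byte
    is the sign (0xC positive, 0xD negative, 0xF unsigned).
    """
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in raw[:-1])
    last = raw[-1]
    digits += str(last >> 4)
    sign = -1 if (last & 0x0F) == 0x0D else 1
    value = sign * int(digits)
    return value / (10 ** scale) if scale else value

# 0x00 0x12 0x34 0x5C -> +12345 with 2 implied decimals = 123.45
unpack_comp3(b"\x00\x12\x34\x5C", scale=2)

# EBCDIC text decodes with the built-in cp037 codec
codecs.decode(b"\xC8\xC5\xD3\xD3\xD6", "cp037")  # "HELLO"
```

The scale (implied decimal places) never appears in the data itself; it comes from the COBOL copybook, which is why the copybook has to travel with the extract.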

Free ETL Tools

Data engineers building or troubleshooting pipelines can use these free browser-based tools for the everyday extraction, validation, and format-conversion tasks that come up during ETL work — none of them require uploading your data to a server.

Further Reading

ETL: The Invisible Backbone of Enterprise Data — a deeper look at how ETL works, why it fails, and how to build it right.

Try the free data transformation tools — file converter, data profiler, data diff, and format converters for CSV, Excel, JSON, and more. Browse free tools →