AI/ML Engineer

Location: US-TX-Austin
Job ID: 2026-7797
# Positions: 1
Category: Applications/Software Development
Deadline Date: 4/20/2026
Duration (Hours): 2080
Duration (Months): 12
Visa Restrictions: Authorized to work in the US

Overview

Texas GovLink, Inc. is an Austin-based firm that has been a leading provider of technical and business professionals to clients in Texas. We are currently seeking an experienced AI/ML Engineer to serve as a key resource on a technical services team.


Texas GovLink offers its family of consultants excellent rates, a local support staff, and an attractive benefits package which includes medical insurance (TGL shares a percentage of the cost), life insurance, a matching 401(k) plan and a cafeteria plan.

Candidates selected for interview will be required to undergo a criminal background check and may be required to complete a drug screen in accordance with federal and state law. Offers of employment are contingent on a successful background check.

Texas GovLink is an equal opportunity employer.

Responsibilities

The AI Innovation team in ITD undertakes rapid development initiatives using AI/ML tools and platforms to drive business innovation at the Texas Department of Transportation.

  • Evaluate emerging AI trends, tools, and vendor solutions against business use-cases.
  • Run proof-of-concepts (PoCs) to test feasibility of new ideas.
  • Design and build applications and AI/ML models tailored to specific use cases (e.g., predictive analytics, natural language processing, computer vision) prioritized for the AI Program.
  • Create scalable AI pipelines that can be integrated into existing systems.
  • Collaborate with data, engineering, and software development teams.

Qualifications

Minimum (Required):

Years   Skills/Experience
2       Strong Python; familiarity with Java / C++ / Go for production environments
2       Object-oriented programming & design patterns
2       Unit testing, CI/CD, Git, containerization (Docker)
2       Data pipelines (Airflow, Prefect, or cloud-native equivalents)
2       Model deployment (REST APIs, gRPC, serverless), monitoring, and versioning
2       Cloud-native training/inference environments (SageMaker, Vertex AI, Azure ML)
2       Kubernetes for scalable inference
