tc is a graph-based, stateless, serverless application & infrastructure composer.
tc defines, creates and manages serverless entities such as functions, mutations, events, routes, states, queues and channels. tc compiles a tree of entities defined in the filesystem into a topology. This composable, namespaced, sandboxed, recursive, versioned and isomorphic topology is called a Cloud Functor.
The word functor was popularized by OCaml's parameterized modules. These modules, called functors, are first class. Cloud Functors are similar in that they are treated as first class and are composable, much like OCaml's elegant modules.
Why tc?
tc is a tool that empowers developers, architects and release engineers to build a serverless system that is simple to define and easy to evolve.
- Developers should not be drowning in permissions and provider-specific (AWS, GCP etc) services. Instead, tc provides a framework for developers to focus on domain-specific functions and abstract entities.
- Architects should not be defining systems that are hard to implement or disconnected from the system definition. Instead, tc's topology is the system definition. The definition represents the entire system to a large extent, with most of it inferred by tc.
- Release engineers should not be managing manifests manually. Instead, tc provides a mechanism to deploy a collection of namespaced topologies as an atomic unit. Canaries, A/B testing and rollbacks are much simpler to configure using tc.
Key features of functors using tc
1. Composable Entities
At its core, tc provides 7 entities (functions, events, mutations, queues, routes, states and channels) that are agnostic to any cloud provider. These entities are the core primitives for defining the topology of any serverless system. For example, consider the following topology definition:
name: example
routes:
  myposts:
    path: /api/posts
    method: GET
    function: bar
    event: MyEvent
events:
  consumes:
    MyEvent:
      function: foo
      channel: room1
channels:
  room1:
    handler: default
functions:
  remote:
    foo: github.com/bar/bar
  local:
    bar: ./bar
Now, the /api/posts route calls the function bar and emits the event MyEvent, which is handled by the function foo. Functions can be defined locally (as subdirectories) or remotely (git repos). In this example, the event finally triggers a channel notification with the event's payload. We just defined the flow without specifying anything about infrastructure, permissions or the provider. This definition is enough to render the topology in the cloud as services, as architecture diagrams and as release manifests.
tc compile maps these entities to the provider's serverless constructs. If the provider is AWS (the default), tc maps routes to API Gateway, events to EventBridge, functions to either Lambda or ECS Fargate, channels to AppSync Events, mutations to AppSync GraphQL and queues to SQS.
2. Namespacing
If we run tc compile in the directory containing the above topology (topology.yml), we see that all the entities are namespaced. This implies there is room for several foo, bar or MyEvent entities in another topology. It also encourages developers to name entities succinctly, similar to function names in a module. With namespacing comes the benefit of having a single version of the namespace, thereby avoiding the need to manage the versions of sub-components.
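For instance, a second topology can reuse the same entity names without clashing. A hypothetical analytics topology could define its own foo and MyEvent alongside the example above:
name: analytics
events:
  consumes:
    MyEvent:
      function: foo
functions:
  local:
    foo: ./foo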
3. Sandboxing
You can create a sandbox of this topology in the cloud (AWS is the default provider) using
tc create -s <sandbox-name> -e <aws-profile>
and invoke it with tc invoke -s sandbox -e env -p payload.json. This sandbox is also versioned, and we can update specific entities or components in it. Sandboxing is fundamental to canary-based routing and deploys. tc create also knows how to build the functions, implicitly, for various language runtimes.
tc update -s sandbox -e env -c events|routes|mutations|functions|flow
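A typical development loop with a sandbox might look like this (a sketch; the sandbox and profile names are placeholders):
tc create -s <sandbox-name> -e <aws-profile>                   # create the sandboxed topology
tc invoke -s <sandbox-name> -e <aws-profile> -p payload.json   # exercise it
tc update -s <sandbox-name> -e <aws-profile> -c functions      # push just the function code
tc delete -s <sandbox-name> -e <aws-profile>                   # tear the sandbox down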
4. Inference
tc compile generates much of the infrastructure boilerplate (permissions, default configurations etc.) needed for the configured provider. Think of infrastructure as types in a dynamic programming language. We can override the defaults or inferred configurations separately from the topology definition. For example, we can have a repository layout as follows:
services/<topology>/<function>
infrastructure/<topology>/vars/<function>.json
infrastructure/<topology>/roles/<function>.json
This encourages developers not to leak infrastructure into domain-specific code or the topology definition, and vice versa. A topology definition could be rendered with different infrastructure providers.
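For example, an overrides file for a hypothetical function named foo might look like this (an illustrative sketch; the supported keys are listed under Infrastructure Spec):
infrastructure/<topology>/vars/foo.json
{
  "memory_size": 512,
  "timeout": 300,
  "environment": {
    "LOG_LEVEL": "debug"
  }
}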
5. Recursive Topology
Functors can be created at any level in the code repository's hierarchy. They are like fractals where we can zoom in or out. For example, consider the following retail order management topology:
order/
|-- payment
| |-- other-payment-processor
| | `-- handler.py
| |-- stripe
| | |-- handler
| | `-- topology.yml
| `-- topology.yml
`-- topology.yml
There are two sub-topologies in the root topology. order, payment and stripe are all valid topologies. tc can create and manage sandboxes at any level, preserving the integrity of the overall graph.
cd order
tc create -s <sandbox> -e <env> --recursive
This feature helps evolve the system and test individual nodes in isolation.
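Each level carries its own topology.yml. In the simplest case a sub-topology needs only a namespace (hypothetical contents for the payment node):
order/payment/topology.yml
name: payment
We can then sandbox just that node by running tc create -s <sandbox> -e <env> from the order/payment directory.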
6. Isomorphic Topology
The output of tc compile is a self-contained, templated topology (or manifest) that can be rendered in any sandbox. The template variables are specific to the provider, sandbox and configuration. When we create the sandbox (tc create) with this templated topology, tc implicitly resolves it by querying the provider. We can also write custom resolvers that resolve these template variables by querying the configured provider (AWS, GCP etc).
tc compile | tc resolve -s sandbox -e env | tc create
We can replace the resolver with sed or a template renderer using values from ENV variables, SSM parameter store, Vault etc. For example:
tc compile | sed 's/{{API_GATEWAY}}/my-gateway/g' | tc create
The resolver can be written in any language that can query the provider efficiently. The output of the compiler, the output of the resolver and the sandbox's metadata are isomorphic: they are structurally the same and can be diffed like git-diff. Diffable infrastructure without external state is a simple yet powerful feature.
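To make this concrete, a templated fragment and its resolved counterpart might look like this (illustrative only; real manifests carry far more detail):
compiled (templated):            "function": "example_bar_{{sandbox}}"
resolved (hypothetical sandbox): "function": "example_bar_john"
Diffing the resolved output of two sandboxes then shows exactly which entities differ.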
This is all too abstract, you say? It is! Let's Get Started
Installation
Download the executable for your OS
Allow tc in Privacy & Security
The first time you run the downloaded executable you will get a popup that says it may be "malicious software"
Do the following:
- Go to the Privacy & Security panel, Security/Settings section
- Make sure App Store and identified developers is selected
- Where it says tc was blocked from use because it is not from an identified developer, click Allow Anyway
Then move the binary into your PATH and make it executable:
mv ~/Downloads/tc /usr/local/bin/tc
chmod +x /usr/local/bin/tc
Building your own
tc is written in Rust. If you prefer to build tc yourself, install rustc/cargo.
Install Cargo/Rust https://www.rust-lang.org/tools/install
cd tc
cargo build --release
sudo mv target/release/tc /usr/local/bin/tc
Getting started
- 1. Bootstrap permissions
- 2. Our first function
- 3. Namespace your functions
- 4. Define the function DAG (flow)
- 5. Add a REST API to invoke ETL
- 6. Notify on completion
- 7. Implement the functions
- 8. Making it recursive
- 9. Configuring infrastructure
Caveat: this is a rough draft and we are still working on the documentation.
Now that we have installed tc and understood the features in the abstract, let's walk through a basic tutorial of creating an ETL (Enhance-Transform-Load) flow using serverless entities.
In this tutorial, we will attempt to learn about the core concepts in tc.
1. Bootstrap permissions
Let's create some base IAM roles and policies for your sandbox. tc maps environments to AWS profiles. There can be several sandboxes per environment/account. For the sake of this example, let's say we have a profile called dev. This dev profile/account can have several dev sandboxes. Let's name our sandbox john.
tc create -s john -e dev -c base-roles
2. Our first function
A simple function looks like this. Let's call this function enhancer. Add a file named handler.py in a directory etl/enhancer.
etl/enhancer/handler.py:
def handler(event, context):
    return {'enhancer': 'abc'}
In the etl directory, we can now create the function by running the following command.
tc create -s <sandbox-name> -e <env>
Example: tc create -s john -e dev
This creates a lambda function named enhancer_john with the base role (tc-base-lambda-role) as the execution role.
AWS Lambda is the default implementation for the function entity. env here is typically the AWS profile.
3. Namespace your functions
Our etl directory now contains just one function called enhancer. Let's create the transformer and loader functions. Add the following files.
etl/transformer/handler.py
def handler(event, context):
    return {'transformer': 'ABC'}
etl/loader/handler.py
def handler(event, context):
    return {'loader': 'ABC'}
We should have the following directory contents:
etl
|-- enhancer
| `-- handler.py
|-- loader
| `-- handler.py
|-- topology.yml
`-- transformer
`-- handler.py
Now that we have these 3 functions, we may want to collectively call them etl. Let's create a file named topology.yml with the following contents:
name: etl
name is the namespace of this collection of functions.
Now in the etl directory, we can run the following command to create a sandbox
tc create -s john -e dev
You should see the following output
Compiling topology
Resolving topology etl
1 nodes, 3 functions, 0 mutations, 0 events, 0 routes, 0 queues
Building transformer (python3.10/code)
Building enhancer (python3.10/code)
Building loader (python3.10/code)
Creating functor etl@john.dev/0.0.1
Creating function enhancer (211 B)
Creating function transformer (211 B)
Creating function loader (211 B)
Checking state enhancer (ok)
Time elapsed: 5.585 seconds
The resulting lambda functions are named 'namespace_function-name_sandbox'. If the name is sufficiently long, tc abbreviates it.
We can test these functions independently:
cd enhancer
tc invoke -s john -e dev -p '{"somedata": 123}'
The word service is overloaded. tc encourages the use of functor or topology to define the collection of entities.
4. Define the function DAG (flow)
Now that we have these functions working in isolation, we may want to create a DAG of these functions. Let's define that flow:
name: etl
functions:
  enhancer:
    root: true
    function: transformer
  transformer:
    function: loader
  loader:
    end: true
tc dynamically figures out which orchestrator to use. By default, it uses Stepfunction (Express) to orchestrate the flow. tc automatically generates an intimidating stepfunction definition; you can inspect it by running tc compile -c flow.
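The generated definition is standard Amazon States Language; a heavily trimmed, illustrative sketch (not the exact output tc produces) might look like:
{
  "StartAt": "enhancer",
  "States": {
    "enhancer": { "Type": "Task", "Resource": "arn:aws:lambda:...:function:etl_enhancer_john", "Next": "transformer" },
    "transformer": { "Type": "Task", "Resource": "arn:aws:lambda:...:function:etl_transformer_john", "Next": "loader" },
    "loader": { "Type": "Task", "Resource": "arn:aws:lambda:...:function:etl_loader_john", "End": true }
  }
}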
Run tc update -s john -e dev to update and create the flow.
5. Add a REST API to invoke ETL
name: etl
routes:
  /api/etl:
    method: POST
    function: enhancer
functions:
  enhancer:
    root: true
    function: transformer
  transformer:
    function: loader
  loader:
    event: Notify
Run tc update -s john -e dev -c routes to update the routes.
6. Notify on completion
name: etl
routes:
  /api/etl:
    method: POST
    function: enhancer
functions:
  enhancer:
    root: true
    function: transformer
  transformer:
    function: loader
  loader:
    event: Notify
events:
  Notify:
    channel: Subscription
channels:
  Subscription:
    function: default
Let's make loader output an event that pushes the status message to a websocket channel. Run tc update -s john -e dev to create/update the events and channels.
curl https://seuz7un8rc.execute-api.us-west-2.amazonaws.com/test/start-etl -X POST -d '{"hello": "world"}'
=> {"enhancer": "abc"}
7. Implement the functions
So far, we created a topology with basic functions, events, routes and a flow to connect them all. The functions themselves don't do much. Functions have dependencies, different runtimes or languages, platform-specific shared libraries and so forth. For example, we want the enhancer to have some dependencies specified in, say, pyproject.toml or requirements.txt. Let's add a file named function.json in the enhancer directory.
enhancer/function.json
{
  "name": "enhancer",
  "description": "Ultimate enhancer",
  "runtime": {
    "lang": "python3.12",
    "package_type": "zip",
    "handler": "handler.handler"
  },
  "build": {
    "kind": "Inline",
    "command": "zip -9 -q lambda.zip *.py"
  }
}
and let's say we had the following deps in pyproject.toml
enhancer/pyproject.toml
[tool.poetry]
name = "enhancer"
version = "0.1.0"
description = ""
authors = ["fu <foo@fubar.com>"]
[tool.poetry.dependencies]
simplejson = "^3.19.2"
botocore = "^1.31.73"
boto3 = "^1.28.73"
pyyaml = "6.0.2"
Now update the function we created by running this from the etl directory:
tc update -s john -e dev -c enhancer
The above command builds the dependencies in a docker container locally and updates the function code with the dependencies.
The -c argument takes an entity category (events, functions, mutations, routes etc) or the name of a specific entity; in this case, the function name.
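Once the dependencies are packaged, the handler can import them like any other module. For instance, a hypothetical tweak to enhancer/handler.py that uses the simplejson dependency declared above:
import simplejson as json

def handler(event, context):
    # simplejson is available because tc packaged the pyproject.toml
    # dependencies alongside the function code
    return {'enhancer': json.dumps(event)}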
There are several ways to package the dependencies depending on the runtime, the size of the dependencies and so forth. Layering is another kind. Let's try to build the transformer using layers. Add the following in transformer/function.json
transformer/function.json
{
  "name": "transformer",
  "description": "Ultimate Transformer",
  "runtime": {
    "lang": "python3.12",
    "package_type": "zip",
    "handler": "handler.handler",
    "layers": ["transformer-deps"]
  }
}
The layers can be built independent of creating/deploying the code, as they don't change that often.
tc build --kind layer --name transformer-deps --publish -e dev
tc update -s john -e dev -c layers
With the above commands, we built the dependencies in a docker container and updated the function(s) to use the latest version of the layer. See Build for details about building functions.
8. Making it recursive
We can make loader itself another sub-topology with its own DAG of entities and still treat etl as the root topology (or functor). Let's add a topology file in loader.
etl/loader/topology.yml
name: loader
Now we can recursively create the topologies from the root topology directory
tc create -s john -e dev --recursive
9. Configuring infrastructure
At times, we require more infrastructure-specific configuration: specific permissions, environment variables, runtime configuration. We can specify an infra path in the topology:
name: etl
infra: "../infra/etl"
routes: ..
In the specified infra directory, we can add environment/runtime variables for let's say enhancer.
../infra/etl/vars/enhancer.json
{
  "memory_size": 1024,
  "timeout": 800,
  "environment": {
    "GOOGLE_API_KEY": "ssm:/goo/api-key",
    "KEY": "VALUE"
  },
  "tags": {
    "developer": "john"
  }
}
If we need specific IAM permissions, do
../infra/etl/roles/enhancer.json
{
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:ListBucketVersions",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::bucket"
      ],
      "Sid": "AllowAccessToS3Bucket1"
    }
  ],
  "Version": "2012-10-17"
}
We may also need additional configuration that is specific to the provider (AWS etc). Add a key called config with the path to the file as its value.
name: etl
infra: "../infra/etl"
config: "../tc.yaml"
routes: ..
See Config
Compiler
tc compile does the following:
- Discovers functions recursively in the current directory.
- Generates build instructions for the discovered functions.
- Interns remote, shared and local functions
- Reads the topology.yml file and validates it using input specification
- Generates the target representations for these entities specific to a provider
- Generates graphql output for mutations definition in topology.yml
- Transpiles flow definitions to stepfn etc.
- Generates checksum of all function directories
- Detects circular flows
To generate the topology in the current directory:
tc compile [--recursive]
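The compiled output is a JSON manifest keyed by entity type. A heavily trimmed, illustrative sketch for the etl topology (field names and defaults vary; the Snapshotter section shows the resolved counterpart):
{
  "functions": {
    "etl_enhancer_{{sandbox}}": { "runtime": "python3.10", "memory": 128, "timeout": 30 },
    "etl_transformer_{{sandbox}}": { "runtime": "python3.10", "memory": 128, "timeout": 30 },
    "etl_loader_{{sandbox}}": { "runtime": "python3.10", "memory": 128, "timeout": 30 }
  },
  "events": {},
  "routes": {},
  "mutations": {},
  "queues": {}
}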
To generate a tree of all functions:
cd examples/apps/retail
tc compile -c functions -f tree
retail
├╌╌ payment
┆ ├╌╌ payment_stripe_{{sandbox}}
┆ ┆ ├╌╌ python3.10
┆ ┆ ├╌╌ provided
┆ ┆ └╌╌
┆ └╌╌ payment_klarna_{{sandbox}}
┆ ├╌╌ python3.10
┆ ├╌╌ provided
┆ └╌╌
├╌╌ pricing
┆ └╌╌ pricing_resolver_{{sandbox}}
┆ ├╌╌ python3.10
┆ ├╌╌ provided
┆ └╌╌
Builder
tc has a sophisticated builder that can build different kinds of artifacts with various language runtimes (Clojure, Janet, Rust, Ruby, Python, Node).
In the simplest case, when there are no dependencies in a function, we can specify how the code is packed (zipped) as follows in function.json:
{
  "name": "simple-function",
  "runtime": {
    "lang": "python3.10",
    "package_type": "zip",
    "handler": "handler.handler"
  },
  "build": {
    "command": "zip -9 lambda.zip *.py",
    "kind": "Code"
  }
}
and then tc create -s <sandbox> -e <env> builds this function using the given command and creates it in the given sandbox and env.
Inline
The above is a pretty trivial example; things get more complicated as we start adding dependencies. If the dependencies are reasonably small (< 50MB), we can inline them in the code artifact (lambda.zip).
{
  "name": "python-inline-example",
  "runtime": {
    "lang": "python3.12",
    "package_type": "zip",
    "handler": "handler.handler",
    "layers": []
  },
  "build": {
    "kind": "Inline",
    "command": "zip -9 -q lambda.zip *.py"
  },
  "test": {
    "fixture": "python run-fixture.py",
    "command": "poetry test"
  }
}
tc create -s <sandbox> -e <env> will implicitly build the artifact with inlined deps and create the function in the given sandbox and env. The dependencies are typically placed in lib/, including shared objects (.so files).
tc builds the inlined zip using docker and the builder image that is compatible with the lambda runtime image.
Layer
If the Inline build is heavy, we can layer the dependencies instead:
{
  "name": "ppd",
  "description": "my python layer",
  "runtime": {
    "lang": "python3.10",
    "package_type": "zip",
    "handler": "handler.handler",
    "layers": ["ppd-layer"]
  },
  "build": {
    "pre": [
      "yum install -y git",
      "yum install -y gcc gcc-c++"
    ],
    "kind": "Layer"
  }
}
Note that we have specified the list of layers the function uses. The layer itself can be built independently of the function, unlike the Inline build kind.
tc build --kind layer
tc publish --name ppd-layer
We can then create or update the function with this layer. At times, we may want to update just the layers in an existing sandboxed function
tc update -s <sandbox> -e <env> -c layers
AWS has a limit on the number of layers and the size of each zipped layer. tc automatically splits a layer into chunks if it exceeds the size limit (while staying within the total limit of 256MB).
Image
While the Layer and Inline build kinds should suffice to pack most dependencies, there are cases where 250MB is not enough. The container Image kind is a good option here. However, building the deps and updating just the code is challenging with pure docker, as you need to know the build sequence. tc provides a mechanism to build a tree of images. For example:
{
  "name": "python-image-tree-example",
  "runtime": {
    "lang": "python3.10",
    "package_type": "image",
    "handler": "handler.handler"
  },
  "build": {
    "kind": "Image",
    "images": {
      "base": {
        "version": "0.1.1",
        "commands": [
          "yum install -y git wget unzip",
          "yum install -y gcc gcc-c++ libXext libSM libXrender"
        ]
      },
      "code": {
        "parent": "base",
        "commands": []
      }
    }
  }
}
In the above example, we define a base image with the dependencies and a code image that packs just the code. Note that code references base as its parent. Effectively, we can build a tree of images (say base dependencies, models, assets and code). These images can be built at any point in the lifecycle of the function. To build the base image, do:
tc build --image base --publish
When --publish is specified, it publishes to the configured ECR repo [See Configuration]. Alternatively, the TC_ECR_REPO env variable can be set to override the config; its value is the ECR repo URI.
With python functions, the image can be built either from a requirements.txt file in the function directory or from a pyproject.toml. tc build works with requirements.txt and poetry.
When all "parent" images have been built, tc create
will create the code
image just-in-time. The tag is the SHA1 checksum of the function directory. The code tag is typically of the format "{{repo}}/code:req-0d4043e5ae0ebc83f486ff26e8e30f3bd404b707""
We can also optionally build the code image:
tc build --image code --publish
Note that the child image uses the parent image version specified in the parent's block.
Syncing base images
While we can docker pull the base and code images locally, it is cumbersome to do so for all functions recursively by resolving their versions. tc build --sync pulls the base and code images based on the current function checksums. Having a copy of the base or parent code images allows us to do incremental updates much faster.
Inspecting the images
We can run tc build --shell in the function directory to get a bash shell. The shell always runs on the code image of the current function checksum. Note that the code image uses the Lambda Runtime Image as its source image.
External parent image
At times, we may need to use a parent image that is shared and defined in another function or build. The following function definition shows how to specify a parent URI in the code image-spec.
{
  "name": "req-external-example",
  "description": "With external parent",
  "runtime": {
    "lang": "python3.10",
    "package_type": "image",
    "handler": "handler.handler"
  },
  "build": {
    "kind": "Image",
    "images": {
      "code": {
        "parent": "{{repo}}/base:req-0.1.1",
        "commands": []
      }
    }
  }
}
parent in the code image-spec is a URI. This is also a way to pin the parent image.
Slab
slab is an abstraction for building dependencies and assets and serving them via a network filesystem (EFS). An example function with a slab build looks like:
{
  "name": "python-example-snap",
  "description": "example function",
  "runtime": {
    "lang": "python3.12",
    "package_type": "zip",
    "mount_fs": true,
    "handler": "handler.handler",
    "layers": []
  },
  "build": {
    "kind": "slab"
  },
  "test": {
    "fixture": "python run-fixture.py",
    "command": "poetry test"
  }
}
tc build --kind slab --publish
This publishes the slab to EFS as configured (See Configuration)
Library
A library is a kind of build that recursively packs a collection of directories to serve as a single library in the target runtime.
For example, let's say we have the following directory structure
lib/
|-- bar
| `-- lib.rb
|-- baz
| `-- lib.rb
`-- foo
`-- lib.rb
We can pack this as a library and publish it as a layer or a node in the image-tree. By default, tc publishes it as a layer.
cd lib
tc build --kind library --name mylib --publish --lang ruby
Why can't this just be of kind layer? Layers typically have their dependencies resolved; a library is just standalone code.
Extension
Lambda extensions are like sidecars that intercept the input/output payload events and can do arbitrary processing on them.
tc build --kind extension
Recursive Builds
To traverse the topology and build the dependencies or code in parallel, do the following:
tc build [--kind code|image|layer] --recursive --publish
Deployer
Creating a Sandbox
cd <topology-dir>
tc create [--sandbox SANDBOX] [-e ENV]
Incremental updates
While developing, we often need to incrementally deploy certain components without recreating the entire topology. tc provides an update command that updates the given component(s).
To update the code for a function (say page-mapper) in the current directory:
tc update --sandbox test -e dev-af -c page-mapper
To update the IAM roles and policies
tc update --sandbox test -e dev-af -c roles
To update the eventbridge event rules:
tc update --sandbox test -e dev-af -c events
To update the environment variables or runtime parameters (usually defined in infrastructure/tc/):
tc update --sandbox test -e dev-af -c vars
To build and update layers
tc update --sandbox test -e dev-af -c layers
To update the Statemachine flow
tc update --sandbox test -e dev-af -c flow
To update tags across stepfns, lambdas, roles, policies, eventbridge rules etc
tc update --sandbox test -e dev-af -c tags
To update logging and tracing config
tc update --sandbox test -e dev-af -c logs
Note that update works only on unfrozen sandboxes. Most stable sandboxes are immutable, so update is disabled for them. To mutate one, unfreeze it first.
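For example, to change a frozen sandbox we would unfreeze it, apply the update, and freeze it again (a sketch using the freeze/unfreeze commands from the CLI reference):
tc unfreeze --sandbox stable -e dev-af
tc update --sandbox stable -e dev-af -c flow
tc freeze --sandbox stable -e dev-af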
Invoker
Specifying Payload
To simply invoke a functor
tc invoke --sandbox main --env dev
By default, tc picks up a payload.json file in the current directory. You can optionally specify a payload file
tc invoke --sandbox main --env dev --payload payload.json
or via stdin
cat payload.json | tc invoke --sandbox main --env dev
or as a param
tc invoke --sandbox main --env dev --payload '{"data": "foo"}'
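The payload file is just the JSON document handed to the invoked entity; a minimal, hypothetical payload.json:
{
  "data": "foo"
}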
Invoking Events and Lambdas
By default, tc invokes a stepfn. We can also invoke a lambda or trigger an EventBridge event:
tc invoke --kind lambda -e dev --payload '{"data"...}'
tc invoke --kind event -e dev --payload '{"data"...}'
Emulator
Lambdas
To emulate the Lambda runtime environment, run the following command. It spins up a docker container with the layers defined in function.json and sets up the paths, environment variables, AWS access, local code and runtime parameters (memory, handlers etc).
cd <function-dir>
tc emulate
To run in foreground
tc emulate
You can now invoke a payload locally with this emulator
tc invoke --local [--payload <payload.json | json-str>]
Stepfunctions
tc also provides a stepfunction emulator. In your top-level topology directory, do:
tc emulate
This spins up a container and runs the emulator on http://localhost:8083
Details to follow on creating and executing [wip]
Inspector
tc provides a lightweight http-based app to inspect the topologies. This is still experimental.
To run the inspector, run tc inspect --trace in the root topology directory. For example:
cd examples/patterns
tc inspect --trace
Releaser
Workflow
Versioning
tc provides a sophisticated releaser module that can version at any level in the topology tree. Instead of managing the versions of each function, route, flow etc, we create a release tag at the top level:
tc tag --service <namespace> --next minor|major
This creates a tag prefixed with the namespace (etl, in our running example).
Changelog
To see the changelog of a specific topology
cd topology-dir
tc changelog
AI-123 Another command
AI-456 Thing got added
# or
tc changelog --between 0.8.1..0.8.6
To search for a specific text in all changelogs
cd root-topology-dir
tc changelog --search AI-1234
=> topology-name, version
Snapshotter
The snapshotter module takes a snapshot of a given sandbox and outputs the same datastructure as the compiler output. This isomorphic characteristic is useful for seeing the diffs between sandboxes.
For example the following outputs a JSON with the topology structure
cd topology-dir
tc snapshot -s stable -e qa -c topology
{
  "events": {},
  "functions": {
    "n_f_stable": {
      "code_size": "100 KB",
      "layers": {},
      "mem": 1024,
      "name": "n_f_stable",
      "revision": "e568c2865203",
      "tc_version": "0.8.71",
      "timeout": 180,
      "updated": "06:05:2025-15:48:32"
    },
  ...
Versions
To see versions of all root topologies in a given sandbox:
tc snapshot -s stable -e qa -f table|json
tc snapshot -s stable -e qa -f table
namespace | sandbox | version | frozen | tc_version | updated_at
------------+---------+----------+--------+------------+---------------------
node1 | stable | 0.0.6 | | 0.0.3 | 04:09:2025-18:20:42
node2 | stable | 0.0.14 | | 0.0.3 | 04:09:2025-18:19:15
node3 | stable | 0.0.15 | | 0.0.3 | 04:09:2025-18:19:28
node11 | stable | 0.0.15 | | 0.0.3 | 04:09:2025-18:19:28
node12 | stable | 0.0.2 | | 0.6.262 | 12:13:2024-06:46:57
To see versions across profiles for a sandbox, provide a csv of profiles/envs:
tc snapshot -s stable -e qa,staging
Topology | qa | staging
------------+----------+----------
node2 | 0.0.27 | 0.0.24
node3 | 0.0.6 | 0.0.6
node4 | 0.0.15 | 0.0.15
node5 | 0.0.26 | 0.0.26
node7 | 0.12.125 | 0.12.125
node8 | 0.2.29 | 0.2.29
node9 | 0.2.102 | 0.2.102
node10 | 0.1.24 | 0.1.24
node12 | 0.0.147 | 0.0.143
Topology Specification
topology.yml
name: <namespace>
infra: <infra-path>
nodes:
  ignore: [<path>]
  dirs: [<path>]
functions:
  FunctionName:
    uri: <String>
    function: <String>
    event: <String>
    queue: <String>
    runtime: RuntimeSpec
    build: BuildSpec
events:
  EventName:
    producer: <String>
    doc_only: <false>
    nth: <sequence int>
    filter: <String>
    rule_name: <String>
    functions: [<String>]
    function: <String>
    mutation: <String>
    channel: <String>
    queue: <String>
    state: <String>
routes:
  Path:
    gateway: <String>
    authorizer: <String>
    method: <POST|GET|DELETE>
    path: <String>
    sync: <true>
    request_template: <String>
    response_template: <String>
    stage: <String>
    stage_variables: <String>
    function: <String>
    state: <String>
    queue: <String>
channels:
  ChannelName:
    function: <String>
    event: <String>
mutations:
  MutationName:
    function: <String>
queues:
  QueueName:
    function: <String>
states: ./states.json | <definition> [optional]
infra is either an absolute or relative path to the infrastructure configs (vars, roles etc). This field is optional and tc tries its best to discover the infrastructure path in the current git repo.
events, routes, functions, mutations, channels and flow are optional.
flow can contain a path to a step-function definition or an inline definition. tc automatically namespaces any inlined or external flow definition.
Entity Matrix
Not all entities are composable with each other. The following shows the compatibility matrix and the implementation status.
| | Function | Event | Queue | Route | Channel | Mutation |
|---|---|---|---|---|---|---|
| Function | No* | No | No* | No | No | No |
| Event | Yes | No | No | No | Yes | No |
| Route | Yes | No* | No* | - | No | No |
| Queue | Yes | No | - | No | No | No |
| Channel | Yes | Yes | No | No | - | No |
| Mutation | Yes | No* | No | No | No | - |
Function Specification
A function.json file in the function directory is optional. tc infers the language and build instructions from the function code. However, for custom options, add a function.json that looks like the following:
{
  "name": String,
  "runtime": RuntimeSpec,
  "build": BuildSpec,
  "infra": InfraSpec,
  "test": TestSpec
}
RuntimeSpec
| Key | Default | Optional? | Comments |
|---|---|---|---|
| lang | Inferred | yes | |
| handler | handler.handler | | |
| package_type | zip | | possible values: zip, image |
| uri | file:./lambda.zip | | |
| mount_fs | false | yes | |
| snapstart | false | yes | |
| memory | 128 | yes | |
| timeout | 30 | yes | |
| provisioned_concurrency | 0 | yes | |
| reserved_concurrency | 0 | yes | |
| layers | [] | yes | |
| extensions | [] | yes | |
| environment | {} | yes | Environment variables |
BuildSpec
JSON Spec
{
  "name": "string",
  // Optional
  "dir": "string",
  // Optional
  "description": "string",
  // Optional
  "namespace": "string",
  // Optional
  "fqn": "string",
  // Optional
  "layer_name": "string",
  // Optional
  "version": "string",
  // Optional
  "revision": "string",
  // Optional
  "runtime": {
    "lang": "Python39" | "Python310" | "Python311" | "Python312" | "Python313" | "Ruby32" | "Java21" | "Rust" | "Node22" | "Node20",
    "handler": "string",
    "package_type": "string",
    // Optional
    "uri": "string",
    // Optional
    "mount_fs": true,
    // Optional
    "snapstart": true,
    "layers": [
      "string",
      /* ... */
    ],
    "extensions": [
      "string",
      /* ... */
    ]
  },
  // Optional
  "build": {
    "kind": "Code" | "Inline" | "Layer" | "Slab" | "Library" | "Extension" | "Runtime" | "Image",
    "pre": [
      "dnf install git -yy",
      /* ... */
    ],
    "post": [
      "string",
      /* ... */
    ],
    // Command to use when build kind is Code
    "command": "zip -9 lambda.zip *.py",
    "images": {
      "string": {
        // Optional
        "dir": "string",
        // Optional
        "parent": "string",
        // Optional
        "version": "string",
        "commands": [
          "string",
          /* ... */
        ]
      },
      /* ... */
    },
    "layers": {
      "string": {
        "commands": [
          "string",
          /* ... */
        ]
      },
      /* ... */
    }
  },
  // Optional
  "infra": {
    "dir": "string",
    // Optional
    "vars_file": "string",
    "role": {
      "name": "string",
      "path": "string"
    }
  }
}
Infrastructure Spec
Runtime Variables
Default Path: infrastructure/tc/
{
  // Optional
  "memory_size": 123,
  // Optional
  "timeout": 123,
  // Optional
  "image_uri": "string",
  // Optional
  "provisioned_concurrency": 123,
  // Optional
  "reserved_concurrency": 123,
  // Optional
  "environment": {
    "string": "string",
    /* ... */
  },
  // Optional
  "network": {
    "subnets": [
      "string",
      /* ... */
    ],
    "security_groups": [
      "string",
      /* ... */
    ]
  },
  // Optional
  "filesystem": {
    "arn": "string",
    "mount_point": "string"
  },
  // Optional
  "tags": {
    "string": "string",
    /* ... */
  }
}
Roles
Config Specification
The following is a sample config file that you can place in your infrastructure root (infrastructure/tc/) or in the path set by TC_CONFIG_PATH. The config has sections specific to each module; all are optional with sane defaults.
compiler:
  verify: false
  graph_depth: 4
  default_infra_path: infrastructure/tc
resolver:
  incremental: false
  cache: false
  stable_sandbox: stable
  layer_promotions: true
deployer:
  guard_stable_updates: true
  rolling: false
builder:
  parallel: false
  autosplit: true
  max_package_size: 50000000
  ml_builder: true
aws:
  eventbridge:
    bus: EVENT_BUS
    rule_prefix: tc-
    default_role: tc-base-event-role
    default_region: us-west-2
    sandboxes: ["stable"]
  ecs:
    subnets: ["subnet-tag"]
    cluster: my-cluster
  stepfunction:
    default_role: tc-base-sfn-role
    default_region: us-west-2
  lambda:
    default_timeout: 180
    default_role: tc-base-lambda-role
    default_region: us-west-2
    layers_profile: LAYER_AWS_PROFILE
    fs_mountpoint: /mnt/assets
  api_gateway:
    api_name: GATEWAY_NAME
    default_region: us-west-2
Environment variables
tc uses special environment variables as feature bits and config overrides. The following is the list of tc environment variables:
TC_DIR
We don't always have to be in the topology or function directory to run a contextual tc command. The TC_DIR env var sets the directory context.
TC_DIR=/path/to/services/fubar tc create -s sandbox -e env
TC_USE_STABLE_LAYERS
At times we may need to use stable layers in non-stable sandboxes. This env variable allows us to use stable layers
TC_USE_STABLE_LAYERS=1 tc create -s sandbox -e env
TC_USE_SHARED_DEPS
This feature flag uses common deps (in EFS) instead of function-specific deps.
TC_USE_SHARED_DEPS=1 tc create -s sandbox -e env
TC_FORCE_BUILD
Tries various fallback strategies to build layers. One strategy is to build locally instead of in a docker container. Another is to use a specific version of Python even if the transitive dependencies need a specific version of Ruby or Python.
TC_FORCE_BUILD=1 tc build --trace
TC_FORCE_DEPLOY
To create or update stable sandboxes (which are prohibited by default), use this var to override.
TC_FORCE_DEPLOY=1 tc create -s sandbox -e env
TC_UPDATE_METADATA
To update deploy metadata in a dynamodb table (the only stateful part of tc) for stable sandboxes:
TC_UPDATE_METADATA=1 tc create -s staging -e env
TC_ECS_CLUSTER
Use this to override the ECS Cluster name
TC_ECS_CLUSTER=my-cluster tc create -s sandbox -e env
TC_USE_DEV_EFS
Experimental EFS with deduped deps and models
TC_USE_DEV_EFS=1 tc create ...
TC_SANDBOX
Set this to have a fixed sandbox name for all your sandboxes
TC_SANDBOX=my-branch tc create -e env
CLI Reference
This document contains the help content for the tc
command-line program.
Command Overview:
- tc
- tc bootstrap
- tc build
- tc cache
- tc compile
- tc config
- tc create
- tc delete
- tc freeze
- tc emulate
- tc inspect
- tc invoke
- tc list
- tc publish
- tc resolve
- tc route
- tc scaffold
- tc test
- tc tag
- tc unfreeze
- tc update
- tc upgrade
- tc version
- tc doc
tc
Usage: tc <COMMAND>
Subcommands:
- bootstrap — Bootstrap IAM roles, extensions etc
- build — Build layers, extensions and pack function code
- cache — List or clear resolver cache
- compile — Compile a Topology
- config — Show config
- create — Create a sandboxed topology
- delete — Delete a sandboxed topology
- freeze — Freeze a sandbox and make it immutable
- emulate — Emulate Runtime environments
- inspect — Inspect via browser
- invoke — Invoke a topology synchronously or asynchronously
- list — List created entities
- publish — Publish layers
- resolve — Resolve a topology from functions, events, states description
- route — Route events to functors
- scaffold — Scaffold roles and infra vars
- test — Run unit tests for functions in the topology dir
- tag — Create semver tags scoped by a topology
- unfreeze — Unfreeze a sandbox and make it mutable
- update — Update components
- upgrade — upgrade tc version
- version — display current tc version
- doc — Generate documentation
tc bootstrap
Bootstrap IAM roles, extensions etc
Usage: tc bootstrap [OPTIONS]
Options:
-R, --role <ROLE>
-e, --profile <PROFILE>
--create
--delete
--show
-t, --trace
tc build
Build layers, extensions and pack function code
Usage: tc build [OPTIONS]
Options:
-e, --profile <PROFILE>
-k, --kind <KIND>
-n, --name <NAME>
-i, --image <IMAGE>
--clean
-r, --recursive
--dirty
--merge
--split
--task <TASK>
-t, --trace
-p, --publish
tc cache
List or clear resolver cache
Usage: tc cache [OPTIONS]
Options:
--clear
--list
-n, --namespace <NAMESPACE>
-e, --env <ENV>
-s, --sandbox <SANDBOX>
-t, --trace
tc compile
Compile a Topology
Usage: tc compile [OPTIONS]
Options:
--versions
-r, --recursive
-c, --component <COMPONENT>
-f, --format <FORMAT>
-t, --trace
tc config
Show config
Usage: tc config
tc create
Create a sandboxed topology
Usage: tc create [OPTIONS]
Options:
-e, --profile <PROFILE>
-R, --role <ROLE>
-s, --sandbox <SANDBOX>
-T, --topology <TOPOLOGY>
--notify
-r, --recursive
--no-cache
-t, --trace
tc delete
Delete a sandboxed topology
Usage: tc delete [OPTIONS]
Options:
-e, --profile <PROFILE>
-R, --role <ROLE>
-s, --sandbox <SANDBOX>
-c, --component <COMPONENT>
-r, --recursive
--no-cache
-t, --trace
tc freeze
Freeze a sandbox and make it immutable
Usage: tc freeze [OPTIONS] --sandbox <SANDBOX>
Options:
-d, --service <SERVICE>
-e, --profile <PROFILE>
-s, --sandbox <SANDBOX>
--all
-t, --trace
tc emulate
Emulate Runtime environments
Usage: tc emulate [OPTIONS]
Options:
-e, --profile <PROFILE>
-s, --shell
-d, --dev
-t, --trace
tc inspect
Inspect via browser
Usage: tc inspect [OPTIONS]
Options:
-t, --trace
tc invoke
Invoke a topology synchronously or asynchronously
Usage: tc invoke [OPTIONS]
Options:
-p, --payload <PAYLOAD>
-e, --profile <PROFILE>
-R, --role <ROLE>
-s, --sandbox <SANDBOX>
-n, --name <NAME>
-S, --step <STEP>
-k, --kind <KIND>
--local
--dumb
-t, --trace
tc list
List created entities
Usage: tc list [OPTIONS]
Options:
-e, --profile <PROFILE>
-r, --role <ROLE>
-s, --sandbox <SANDBOX>
-c, --component <COMPONENT>
-f, --format <FORMAT>
-t, --trace
tc publish
Publish layers
Usage: tc publish [OPTIONS]
Options:
-e, --profile <PROFILE>
-R, --role <ROLE>
-k, --kind <KIND>
--name <NAME>
--list
--promote
--demote
--download
--version <VERSION>
--task <TASK>
--target <TARGET>
-t, --trace
tc resolve
Resolve a topology from functions, events, states description
Usage: tc resolve [OPTIONS]
Options:
-e, --profile <PROFILE>
-R, --role <ROLE>
-s, --sandbox <SANDBOX>
-c, --component <COMPONENT>
-q, --quiet
-r, --recursive
--diff
--no-cache
-t, --trace
tc route
Route events to functors
Usage: tc route [OPTIONS] --service <SERVICE>
Options:
-e, --profile <PROFILE>
-E, --event <EVENT>
-s, --sandbox <SANDBOX>
-S, --service <SERVICE>
-r, --rule <RULE>
--list
-t, --trace
tc scaffold
Scaffold roles and infra vars
Usage: tc scaffold
tc test
Run unit tests for functions in the topology dir
Usage: tc test [OPTIONS]
Options:
-d, --dir <DIR>
-l, --lang <LANG>
--with-deps
-t, --trace
tc tag
Create semver tags scoped by a topology
Usage: tc tag [OPTIONS]
Options:
-n, --next <NEXT>
-s, --service <SERVICE>
--dry-run
--push
--unwind
-S, --suffix <SUFFIX>
-t, --trace
tc unfreeze
Unfreeze a sandbox and make it mutable
Usage: tc unfreeze [OPTIONS] --sandbox <SANDBOX>
Options:
-d, --service <SERVICE>
-e, --profile <PROFILE>
-s, --sandbox <SANDBOX>
--all
-t, --trace
tc update
Update components
Usage: tc update [OPTIONS]
Options:
-e, --profile <PROFILE>
-R, --role <ROLE>
-s, --sandbox <SANDBOX>
-c, --component <COMPONENT>
-a, --asset <ASSET>
--notify
-r, --recursive
--no-cache
-t, --trace
tc upgrade
upgrade tc version
Usage: tc upgrade [OPTIONS]
Options:
-v, --version <VERSION>
-t, --trace
tc version
display current tc version
Usage: tc version
tc doc
Generate documentation
Usage: tc doc [OPTIONS]
Options:
-s, --spec <SPEC>