Welcome to the Tecton Online Inference Python Client’s documentation!
Tecton provides a low-latency feature server that exposes HTTP endpoints for retrieving feature values and metadata from the online store. These endpoints are typically called at model prediction time. The feature server retrieves data from the online store and performs any additional aggregations and filtering as needed. For more information on the HTTP API, see the Tecton HTTP API documentation.
This library provides a Python client that makes it easy to call the feature server endpoints from your Python code.
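As an illustration of the kind of call this client wraps, the sketch below builds a request body for the feature server's get-features endpoint and shows (commented out) how it could be posted with the Python standard library. The cluster URL, API key, workspace, feature service name, and join keys are all placeholder assumptions; consult the Tecton HTTP API documentation for the authoritative request schema.

```python
import json
from urllib import request

# Placeholder values -- substitute your own cluster URL and API key.
API_URL = "https://example.tecton.ai/api/v1/feature-service/get-features"
API_KEY = "my-api-key"


def build_get_features_request(workspace, feature_service, join_keys):
    """Build a JSON body for a get-features call.

    The field names below follow the shape described in the Tecton HTTP API
    docs, but treat this as a sketch rather than a complete schema.
    """
    return {
        "params": {
            "workspace_name": workspace,
            "feature_service_name": feature_service,
            "join_key_map": join_keys,
        }
    }


# Hypothetical workspace, service, and entity key for illustration.
body = build_get_features_request(
    "prod", "fraud_detection_service", {"user_id": "user_123"}
)

# The actual call requires network access and a valid API key:
# req = request.Request(
#     API_URL,
#     data=json.dumps(body).encode(),
#     headers={
#         "Authorization": f"Tecton-key {API_KEY}",
#         "Content-Type": "application/json",
#     },
# )
# with request.urlopen(req) as resp:
#     features = json.load(resp)
```

The client library exists so that application code does not have to assemble these request bodies and headers by hand.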
This library is currently under development. Please contact Tecton support if you have any questions.