All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Changes are grouped as follows:

- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be removed features.
- `Removed` for now removed features.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities.
- In the `unstable` package: Improved error logging and error messages when config revision changes are detected
- In the `unstable` package: Fixed `handle.set_service_running()` call to report service status to Windows Service Control Manager
- In the `unstable` package: Fixed a metrics pickling error: the runtime no longer passes a `Metrics` instance; extractors are now responsible for creating and managing their own `Metrics` instance.
  - `CognitePusher` now automatically creates time series for late-registered metrics (e.g. `python_gc_*`, `python_info`), preventing "Time series not found" errors during shutdown.
- In the `unstable` package: Removed `upload-queue-size`, `retry_startup` and dataset related config from the base `ExtractorConfig`.
- Removed manual setting of the `is_uploaded` property on `CogniteExtractorFileApply` objects during file upload, to allow the SDK to manage upload status.
- Added support for `dataset`, `dataset_external_id` and `upload_queue_size` in the Extractor Base Config.
- Updated the maximum size of string time series datapoints to be constrained by UTF-8 encoded byte count instead of string length when using upload queue(s), aligning with the API behavior changes.
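For illustration, a queue-side check under this rule counts encoded bytes rather than characters (the helper name and the limit of 255 below are assumptions for the sketch, not the API's documented limit):

```python
def fits_size_limit(value: str, max_bytes: int = 255) -> bool:
    # Multi-byte characters (e.g. "æ", 2 bytes in UTF-8) consume more of
    # the budget than their character count suggests.
    return len(value.encode("utf-8")) <= max_bytes
```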
- Fixed file reading by falling back to system encoding if UTF-8 fails.
- Ability for users to customize the working directory of the extractor using the `--cwd` flag in the CLI.
- Added a new command-line argument, `--log-level` (`-l`), to allow users to override configured log levels for a single run.
- Added a new `--service` CLI argument to allow users to run the application as a Windows service.
- Fixed encoding issue on Windows by enforcing UTF-8 for consistent behavior across platforms.
- Added the `CDMTimeSeriesUploadQueue` class to upload time series datapoints to instances in the Cognite Data Model (CDM). This new uploader uses `NodeId` for identification and can automatically create `CogniteExtractorTimeSeriesApply` instances if they are missing.
- Refactored the time series uploaders for better code reuse and maintainability. A new generic base class, `BaseTimeSeriesUploadQueue`, has been introduced to contain common logic for datapoint validation and sanitization. Both the existing `TimeSeriesUploadQueue` and the new `CDMTimeSeriesUploadQueue` now inherit from this base class.
- Added a common method `_report_run` that can be called from any extractor using this library to report an extractor run to Extraction Pipelines. This automatically handles truncating the message to less than 1000 characters if required.
- Fixed `_report_success` calls failing if the `success_message` was longer than 1000 characters
- In the `unstable` package: `-l`/`--local-override` is renamed to `-f`/`--force-local-config`
- In the `unstable` package: Proper retries when fetching configs from CDF
- Fixed loading of prometheus server config
- Disabled recursive search for the `.env` file. This prevents loading environment variables from `.env` files in directories other than the current working directory. Environment variable management is now more explicit, enhancing security and reducing potential misconfiguration.
- Allow users to override what message to send on shutdown
- In the `unstable` package: Add a `from_environment` method to `ConnectionConfig`
- Locked the `dacite` dependency to `<1.9.0` to avoid issues with the new release.
- In the `unstable` package: Task description
- In the `unstable` package: Convenience methods to create scheduled tasks (i.e. `from_interval`, `from_cron`)
- In the `unstable` package: A `TaskContext` class, which exposes a logging interface to use for logging and error reporting within a task.
- In the `unstable` package: Don't base tasks on dataclasses
- In the `unstable` package: Error reporting has changed from a single `error(level=..., description=..., ...)` method to several more specific methods, with the goal of being more convenient to use, e.g. `self.fatal(desc)` or `self.warning(desc)`. Long-lasting errors are also separated out into `begin_error`, `begin_warning`, etc., to avoid the `self.warning(...).instant()` workaround.
- In the `unstable` package: Change task signature to take a `TaskContext` as an argument
- In the `unstable` package: Correctly set the `action` attribute on reported tasks
- In the `unstable` package: Update path and query params to adapt to changes in the unstable API
- In the `unstable` package: Remove logic to attempt older config revisions; just fail if the current config is invalid
- Loosened dependency restrictions on `0.*` packages
- Added an `ssl_verify` argument to file uploaders to either use custom CA bundles or to disable SSL verification completely.
- Added a feature to report file upload errors
- Improved logging when uploading files
- Handling of files with size 0
- Fixed an issue caused by attempting to update file `mimeType` on AWS clusters.
- Reverted the change in 7.5.2; the root cause was a different issue.
- Avoid trying to update the MIME type when updating empty files.
- The file upload queue will now use `file_transfer_timeout` from the passed `CogniteClient`.
- File processing utils
- `CastableInt` class that represents an integer, to be used in config schema definitions. The difference from using `int` is that a field of this type can be either a string or a number in the YAML file, while a field of type `int` must be a number.
- `PortNumber` class that represents a valid port number, to be used in config schema definitions. Just like `CastableInt`, it can be a string or a number in the YAML file. This allows, for example, setting a port number using an environment variable.
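To illustrate the described behaviour, here is a minimal re-implementation sketch (an assumption, not the library's actual code):

```python
class CastableInt(int):
    # int("8080") and int(8080) both work, so a YAML string value can be
    # coerced to a regular integer.
    def __new__(cls, value):
        return super().__new__(cls, int(value))


class PortNumber(CastableInt):
    # Same coercion, plus a range check for valid port numbers.
    def __new__(cls, value):
        port = super().__new__(cls, value)
        if not 0 < port <= 65535:
            raise ValueError(f"{int(port)} is not a valid port number")
        return port
```

This is why a field typed as `PortNumber` can be filled from an environment variable (which is always a string), while a plain `int` field cannot.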
- Fix file upload when private link is used
- Configuration for ignore regexp pattern
- Fix file metadata update
- Remove warnings from data models file uploads
- Updated cognite SDK version.
- Regression: Reverting change related to `file_meta` parameter in `IOUploadQueue`
- Added support for AWS file upload
- Updated cognite sdk version
- Upload to Core DM/Classic file.
- Use httpx to upload files to CDF instead of the Python SDK. May improve performance on Windows.
- Add additional validation to cognite config before creating a cognite client, to provide better error messages when configuration is obviously wrong.
- Produce a config error when missing `token-url` and `tenant`, instead of eventually producing an `OAuth 2 MUST utilize https` error when getting the token.
- Reformat log messages to not have newlines
- Fixed using the `keyvault` tag in remote config.
- Fixed an issue with the `retry` decorator where functions would not be called at all if the cancellation token was set. This resulted in errors with, for example, upload queues.
- An upload queue for data model instances.
- A new type of state store that stores hashes of ingested items. This can be used to detect changed RAW rows or data model instances.
- Update cognite-sdk version to 7.43.3
- Fixed an issue preventing retries in file uploads from working properly
- File external ID when logging failed file uploads
- Fixed a race condition in state stores and uploaders where a shutdown could result in corrupted state stores.
- Update type hints for the time series upload queue to allow status codes
- `cognite_exceptions()` did not properly retry file uploads
- Enhancement of 7.0.5: more use cases covered (to avoid repeatedly fetching a new token).
- When using remote config, the full local `idp-authentication` is now injected (some fields were missing).
- The file upload queue is now able to stream files larger than 5GiB.
- The background thread `ConfigReloader` now caches the `CogniteClient` to avoid repeatedly fetching a new token.
- Max parallelism in the file upload queue can now properly be set to larger values than `max_workers` in the `ClientConfig` object.
- Storing states with the state store will lock the state store. This fixes an issue where iterating through a changing dict could cause issues.
- Fix file size upper limit.
- Support for files without content.
- Ensure that `CancellationToken.wait(timeout)` only waits for at most `timeout`, even if it is notified in that time.
- The file upload queues have changed behaviour.
  - Instead of waiting to upload until a set of conditions is met, they start uploading immediately.
  - The `upload()` method now acts more like a `join`, waiting on all the uploads in the queue to complete before returning.
  - A call to `add_to_upload_queue` when the queue is full will hang until the queue is no longer full before returning, instead of triggering an upload and hanging until everything is uploaded.
  - The queues now need to be set up with a max size. The max upload latency is removed.

  As long as you use the queue as a context (i.e. `with FileUploadQueue(...) as queue:`) you should not have to change anything in your code. The behaviour of the queue will change, and it will most likely be much faster, but it will not require any changes from you as a user of the queue.
- `threading.Event` has been replaced globally with `CancellationToken`. The interfaces are mostly compatible, though `CancellationToken` does not have a `clear` method. The compatibility layer is deprecated.
  - Replace calls to `is_set` with the property `is_cancelled`.
  - Replace calls to `set` with the method `cancel`.
- All methods which took a `threading.Event` now take a `CancellationToken`. You can use `create_child_token` to create a token that can be cancelled without affecting its parent token; this is useful for creating stoppable sub-modules that are stopped if a parent module is stopped. Notably, calling `stop` on an upload queue no longer stops the parent extractor, as this was never intended behavior.
- The deprecated `middleware` module has been removed.
- `set_event_on_interrupt` has been replaced with `CancellationToken.cancel_on_interrupt`.
- You can now use `Path` as a type in your config files.
- `CancellationToken` as a better abstraction for cancellation than `threading.Event`.
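To illustrate the `is_set` to `is_cancelled` and `set` to `cancel` mapping, here is a minimal stand-in mirroring the renamed interface (a sketch only; the real `CancellationToken` in extractor-utils has a richer interface):

```python
import threading


class CancellationToken:
    """Sketch of the Event-to-CancellationToken rename, not the real class."""

    def __init__(self) -> None:
        self._event = threading.Event()
        self._children: list["CancellationToken"] = []

    @property
    def is_cancelled(self) -> bool:
        # Replaces Event.is_set()
        return self._event.is_set()

    def cancel(self) -> None:
        # Replaces Event.set(); also propagates to child tokens.
        self._event.set()
        for child in self._children:
            child.cancel()

    def create_child_token(self) -> "CancellationToken":
        # Cancelling the child leaves the parent running; cancelling the
        # parent cancels the child.
        child = CancellationToken()
        self._children.append(child)
        return child
```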
To migrate from version 6.* to 7, you need to update how you interact with
cancellation tokens. The type has now changed from `Event` to
`CancellationToken`, so make sure to update all of your type hints etc. There is
a compatibility layer for the `CancellationToken` class, so that it has the same
methods as an `Event` (except for `clear()`), which means it should act as a
drop-in replacement for now. This compatibility layer is deprecated, and will be
removed in version 8.
If you are using file upload queues, read the entry in the Changed section. You will most likely not need to change your code, but how the queue behaves has changed for this version.
- File upload queues now reuse a single thread pool across runs instead of creating a new one each time `upload()` is called.
- Option to specify retry exceptions as a dictionary instead of a tuple. Values should be a callable determining whether a specific exception object should be retried or not. Example:

  ```python
  @retry(exceptions={ValueError: lambda x: "Invalid" not in str(x)})
  def func() -> None:
      value = some_function()
      if value is None:
          raise ValueError("Could not retrieve value")
      if not_valid(value):
          raise ValueError(f"Invalid value: {value}")
  ```
- Templates for common retry scenarios. For example, if you're using the `requests` library, you can do `retry(exceptions=request_exceptions())`
- Default parameters in `retry` have changed to be less aggressive. Retries will apply backoff by default, and give up after 10 retries.
- Aliases for keyvault config to align with dotnet utils
- Improved the state store retry behavior to handle both fundamental and wrapped network connection errors.
- Added support to retrieve secrets from Azure Keyvault.
- Added an optional `security-categories` attribute to the `cognite` config section.
- Fixed a type hint in the `post_upload_function` for upload queues.
- Added `IOFileUploadQueue` as a base class of both `FileUploadQueue` and `BytesUploadQueue`. This is an upload queue for functions that produce `BinaryIO`, uploading to CDF Files.
- Correctly handle equality comparison of `TimeIntervalConfig` objects.
- Added ability to specify dataset under which metrics timeseries are created
- Improved the state store retry behavior to handle connection errors
- Fixed the `__iter__` method on the state store to return an iterator
- Update `cognite-sdk` to v7
- Added an `__iter__` method on the state store to return the keys of the local state dict
- Added `load_yaml_dict` to `configtools.loaders`.
- Fixed getting the config `type` when `!env` was used in the config file.
- Added a `__len__` method on the state store to return the length of the local state dict
- Fix on `find_dotenv` call
- Update cognite-sdk version to 6.24.0
- Fixed the type hint for the `retry` decorator. The list of exception types must be given as a tuple, not an arbitrary iterable.
- Fixed retries for the sequence upload queue.
- The sequence upload queue reported the number of distinct sequences it had rows for, not the number of rows. That is now changed to the number of rows.
- When the sequence upload queue uploaded, it always reported 0 rows uploaded because of a bug in the logging.
- Latency metrics for upload queues.
- Added support for queuing assets upload
- Timestamps before 1970 are no longer filtered out, to align with changes to the timeseries API.
- The event upload queue now upserts events. If creating an event fails due to the event already existing, it will be updated instead.
- Support for `connection` parameters
- Upload queue size limit now triggers an upload when the size has reached the limit, not when it exceeded the limit.
- Legacy authentication through API keys has been removed throughout the code base.
- A few deprecated modules (`authentication`, `prometheus_logging`) have been deleted.
- `uploader` and `configtools` have been changed from one module to a package of multiple modules. The content has been re-exported to preserve compatibility, so you can still do

  ```python
  from cognite.extractorutils.configtools import load_yaml, TimeIntervalConfig
  from cognite.extractorutils.uploader import TimeSeriesUploadQueue
  ```

  But now, you can also import from the submodules directly:

  ```python
  from cognite.extractorutils.configtools.elements import TimeIntervalConfig
  from cognite.extractorutils.configtools.loaders import load_yaml
  from cognite.extractorutils.uploader.time_series import TimeSeriesUploadQueue
  ```

  This has first and foremost been done to improve the codebase and make it easier to continue to develop.
- Updated the version of the Cognite SDK to version 6. Refer to the changelog and migration guide for the SDK for details on the changes it entails for users.
- Several small single-function modules have been removed and the content has been moved to the catch-all `util` module. This includes:
  - The `add_extraction_pipeline` decorator from the `extraction_pipelines` module
  - The `throttled_loop` generator from the `throttle` module
  - The `retry` decorator from the `retry` module
- Support for `audience` parameter in `idp-authentication`
The deletion of API keys and the legacy OAuth2 implementation should not affect
your extractors or your usage of the utils unless you were depending on the old
OAuth implementation directly and not through configtools or the base classes.
To update to version 5 of extractor-utils, you need to:

- Change where you import a few things:
  - Change from `from cognite.extractorutils.extraction_pipelines import add_extraction_pipeline` to `from cognite.extractorutils.util import add_extraction_pipeline`
  - Change from `from cognite.extractorutils.throttle import throttled_loop` to `from cognite.extractorutils.util import throttled_loop`
  - Change from `from cognite.extractorutils.retry import retry` to `from cognite.extractorutils.util import retry`
- Consult the migration guide for the Cognite SDK version 6 for details on the changes it entails for users.
The changes in this version are only breaking for your usage of the utils. Any extractor you have written will not be affected by the changes, meaning you do not need to bump the major version for your extractors.
- Default size of RAW queues is now 50 000 for `UploaderExtractor`s as well.
- `FileSizeConfig` class, similar to `TimeIntervalConfig`, that allows human readable formats such as `1 kb` or `3.7 mib`, as well as bytes directly. It then computes properties for bytes, kilobytes, etc.
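An illustrative parser for such values (behaviour sketched from the description; the unit names and exact parsing rules of `FileSizeConfig` are assumptions):

```python
import re

# Decimal (kb, mb, ...) vs binary (kib, mib, ...) multipliers.
_UNITS = {
    "b": 1,
    "kb": 10**3, "mb": 10**6, "gb": 10**9,
    "kib": 2**10, "mib": 2**20, "gib": 2**30,
}


def parse_file_size(value) -> int:
    """Accept raw byte counts or strings like '1 kb' and '3.7 mib'."""
    if isinstance(value, (int, float)):
        return int(value)
    match = re.fullmatch(r"([\d.]+)\s*([a-z]*)", value.strip().lower())
    if match is None:
        raise ValueError(f"Invalid file size: {value!r}")
    number, unit = match.groups()
    unit = unit or "b"
    if unit not in _UNITS:
        raise ValueError(f"Unknown unit: {unit!r}")
    return int(float(number) * _UNITS[unit])
```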
- Fixed a bug in the state store when decimal valued incremental fields are used.
- Add support for certificate authentication
- Change minimum cognite-sdk version to 5.8
- Update cognite-sdk version to 5.1.1
- Fixed a bug in the error handling for reporting heartbeats.
- Allow indexing state stores. You can now use the indexing notation to access values in a state store like you would do e.g. in a dictionary. Examples:

  ```python
  states = LocalStateStore(...)

  # You can now set states like so:
  states["id"] = (None, 5)

  # Getting current states:
  low, high = states["another_id"]

  # You can also check if an entry has a stored state with the 'in' operator:
  if "new_id" not in states:
      # do something you only do the first time you process an item
      ...
  ```
- Fixed a typo throughout the library: `cancelation_token` is now called `cancellation_token` everywhere. If you ever e.g. specify a cancellation token in a keyword argument, make sure to update the spelling.
- Allow any interval to be configured as a string in addition to integers, so e.g. upload intervals can be configured as `upload-interval: 2m`, with `s`/`m`/`h`/`d` being valid units, and `s` being implied when units are missing (to preserve backwards compatibility with old config files). If your extractor reads from these config fields, you need to update to read the `seconds` (or the computed `minutes`, `hours` or `days`) attribute instead of the fields directly. For example:

  ```python
  RawUploadQueue(
      cognite_client,
      max_upload_interval=config.upload_interval,
  )
  ```

  must be changed to

  ```python
  RawUploadQueue(
      cognite_client,
      max_upload_interval=config.upload_interval.seconds,
  )
  ```
In order to update from version 3 to version 4, you need to:

- Change `cancelation_token` to `cancellation_token` every time you pass one as a keyword argument (or read from an attribute)
- Access the `seconds` attribute from any time values you read from the default config (such as `upload_interval`s or `timeout`s)
- Consider updating any time values you have in your own config parameters to be of type `TimeIntervalConfig` instead of `int`, to allow users the option of configuring time values in a human-readable format.
A type checker, like mypy, will be able to detect any breaking changes this update introduces. We highly recommend scanning your code with a type checker.
Updating from version 3 to 4 should not introduce breaking changes to your extractor.
- Decorator which adds extraction pipeline functionality
- Correctly catch errors when reloading remote configuration files
- Support running extractor utils inside Cognite Functions
- Support for exposing prometheus metrics on a local port instead of pushing to pushgateway
- Remove old version warning from the cognite SDK.
- Correctly do not request the experimental SDK when using remote configuration files
- `SequenceUploadQueue`'s `create_missing` functionality can now be used to set Name and Description values on newly created sequences.
- Remote configuration files are now fully released and supported without using the experimental SDK.
- Dataset and linked asset information are correctly set on created sequences.
- Type hint for sequence column definitions updated to be more consistent.
- Option to set data set id when creating missing time series in the time series upload queue.
- Update cognite-sdk to version 4.0.1, which removes the support for reserved environment variables such as `COGNITE_API_KEY` and `COGNITE_CLIENT_ID`.
- Python 3.7 support
Preview release of remote configuration of extractors. This allows users of your extractors to configure their extractors from CDF.
In this section we will go through how you can start using remote configuration in your extractors.
If you have based your extractor on the Extractor base class, remote
configuration is already implemented for your extractor without any need for
further changes to your code.
To include the feature, you must update `cognite-extractor-utils` to
`2.3.0-beta1`, and add a dependency on `cognite-sdk-experimental` version
`>=0.76.0`.

Otherwise, you must use the new `ConfigResolver` class in the `configtools`
module:
```python
# With automatic CLI (ie, read config from command line args):
resolver = ConfigResolver.from_cli(
    name="my extractor",
    description="Short description of my extractor",
    version="1.2.3",
    config_type=MyConfig,
)
config: MyConfig = resolver.config

# With path to a yaml file:
resolver = ConfigResolver(
    config_path="/path/to/config.yaml",
    config_type=MyConfig,
)
config: MyConfig = resolver.config
```

The resolver will automatically fetch configuration from CDF if remote configuration is used, otherwise it will return the same as `load_yaml`.
When using the base class, you have the option to automatically detect new config revisions, and do one of several predefined actions (keep in mind that this is not exclusive to remote configs; if the extractor is running with a local configuration that changes, it will do the same action). You specify which with the `reload_config_action` enum argument. The enum can be one of the following values:

- `DO_NOTHING`, which is the default.
- `REPLACE_ATTRIBUTE`, which will replace the `config` attribute on the object (keep in mind that if you are using the `run_handle` instead of subclassing, this will have no effect). Be also aware that anything that is set up based on the config (upload queues, cognite client objects, loggers, connections to source, etc.) will not change in this case.
- `SHUTDOWN`, which will set the `cancelation_token` event, and wait for the extractor to shut down. It is then intended that the service layer running the extractor (i.e. Windows services, systemd, Docker, etc.) will be configured to always restart the service if it shuts down. This is the recommended approach for reloading configs, as it is always guaranteed that everything will be re-initialized according to the new configuration.
- `CALLBACK`, which is similar to `REPLACE_ATTRIBUTE` with one difference: after replacing the `config` attribute on the extractor object, it will call the `reload_config_callback` method, which you will have to override in your subclass. This method should then do any necessary cleanup or re-initialization needed for your particular extractor.
To enable detection of config changes, set the `reload_config_action` argument to the `Extractor` constructor to your chosen action:
```python
# Using run handle:
with Extractor(
    name="my_extractor",
    description="Short description of my extractor",
    config_class=MyConfig,
    version="1.2.3",
    run_handle=run_extractor,
    reload_config_action=ReloadConfigAction.SHUTDOWN,
) as extractor:
    extractor.run()


# Using subclass:
class MyExtractor(Extractor):
    def __init__(self):
        super().__init__(
            name="my_extractor",
            description="Short description of my extractor",
            config_class=MyConfig,
            version="1.2.3",
            reload_config_action=ReloadConfigAction.SHUTDOWN,
        )
```

The extractor will then periodically check if the config file has changed. The default interval is 5 minutes; you can change this by setting the `reload_config_interval` attribute. As with any other interval in extractor-utils, the unit is seconds.
When using remote configuration, you will still need to configure the extractor
with some minimal parameters - namely a CDF project, credentials for that
project, and an extraction pipeline ID to fetch configs from. There is also a
new global config field named `type`, which is either `remote` or `local` (which
is the default).
An example for this minimal config follows:

```yaml
type: remote

cognite:
  # Read these from environment variables
  host: ${COGNITE_BASE_URL}
  project: ${COGNITE_PROJECT}

  idp-authentication:
    token-url: ${COGNITE_TOKEN_URL}
    client-id: ${COGNITE_CLIENT_ID}
    secret: ${COGNITE_CLIENT_SECRET}
    scopes:
      - ${COGNITE_BASE_URL}/.default

  extraction-pipeline:
    external-id: my-extraction-pipeline
```
The config file stored in CDF should omit all of these fields, as they will be
overwritten by the ConfigResolver to be the values given here. In other words,
for most extractor deployments, you should be able to leave out the cognite
field in the config stored in CDF.
- `.env` files will now be loaded if present at runtime
- Check that a configured extraction pipeline actually exists, and report an appropriate error if not.
- A few type hints in the retry module were more restrictive than needed (such as requiring `int` when `float` would work).
- Gracefully handle wrongful data in state stores. If JSON parsing fails, use an empty state store as default.
- Exception messages for `InvalidConfigError`s have been improved, and when using the extractor base class it will print them in a formatted way instead of dumping a stack trace on invalid configs.
- Use `Optional` with defaults in code instead of dataclass defaults in `UploaderExtractorConfig`, as this allows non-default config sections in subclasses.
- Include defaults for queue sizes in `UploaderExtractor`
- Allow wider ranges for certain dependencies
  - Allow `>=3.7.4, <5` for typing-extensions
  - Allow `>=5.3.0, <7` for PyYAML
- `uploader_extractor` and `uploader_types` modules used to create extractors writing to events, timeseries, or raw by calling a common method. This is primarily used for the utils extensions, but can be useful for very simple extractors in general.
- Fixed a signature bug in `SequenceUploadQueue`'s `__enter__` method preventing it from being used as a context.
- Fixed an issue where the base class would not always load a default `LocalStateStore` if requested
- A `get_current_state_store` class method on the `Extractor` class which returns the most recent state store loaded
- Fixed retries to not block the GIL and respect the cancellation token
- A `get_current_config` class method on the `Extractor` class which returns the most recent config file read
- Option to not handle `SIGINT`s gracefully
- Configurable heartbeat interval
- Use `cognite-sdk-core` as base instead of `cognite-sdk`
- Update Arrow to `>1.0.0`
To update your projects to 2.0.0 there are two things to consider:
- If you are specifying a version of the Cognite SDK to use in your dependency list, you must now specify `cognite-sdk-core` instead. Otherwise you might install both versions. E.g. if your `pyproject.toml` looks like this:

  ```toml
  cognite-sdk = "^2.32.0"
  cognite-extractor-utils = "^1.3.3"
  ```

  You must also change the `cognite-sdk` dependency when updating `extractor-utils`:

  ```toml
  cognite-sdk-core = "^2.32.0"
  cognite-extractor-utils = "^2.0.0"
  ```
- If your extractor is using the included `Arrow` package, there are a few breaking changes (most notably that the `timestamp` attribute is renamed to `int_timestamp`). Consult their migration guide to make sure you are in compliance.
- Allow environment variable substitution in bool type config fields without breaking generic environment variable substitution
- Reverts 1.6.0 as it broke generic environment variable substitution
- Allow environment variable substitution in bool type config fields
- Make Cognite SDK timeout configurable
- Add a base class for extractors to remove a lot of boilerplate code necessary for startup/shutdown, initialization etc.
- Allow using `Enum`s in config classes
- An `ensure_assets` function similar to `ensure_timeseries`
- A `BytesUploadQueue` that takes byte arrays directly instead of file paths
- A `throttled_loop` generator that iterates at most every `n` seconds
- An `EitherIdConfig` to configure e.g. data sets with either an ID or external ID in the same field.
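The `throttled_loop` idea can be sketched as follows (conceptual only; the real generator in extractor-utils also takes a cancellation token, and its exact signature may differ):

```python
import time
from typing import Iterator


def throttled_loop(target_time: float, iterations: int) -> Iterator[int]:
    # Yield at most once every `target_time` seconds: if the loop body
    # finishes early, sleep off the remainder; if it is slow, continue
    # immediately.
    for i in range(iterations):
        start = time.monotonic()
        yield i
        remaining = target_time - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```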
- Never call `/login/status` as the endpoint is deprecated
- Include missing classes and modules in docs
- Using `dataset-id` or `dataset-external-id` fields, use the new `EitherIdConfig` instead.
- Add a base class for extractors to remove a lot of boilerplate code necessary for startup/shutdown, initialization etc.
- Fix an issue in `SequenceUploadQueue.add_to_upload_queue` when adding multiple rows to the same ID
- Changed bucket sizes for observed times in uploader metrics to be more suited for expected values.
- Option to provide additional custom args to token fetching (via the `token_custom_args` arg to the `CogniteClient` constructor)
- Use token fetching from the Cognite SDK instead of our own implementation
- `Authenticator` class
- Add config parameter to enable metrics of log messages. It reveals how many logging events happened per logger and log-level.
- Reduce cardinality (by reducing label count) on autogenerated metrics. E.g. don't label `TIMESERIES_UPLOADER_POINTS_WRITTEN` with which time series it's writing to.
- Add option to specify external IDs for data sets
- Fix labels for generated Prometheus
- Add a cancellation token to all data uploaders and metrics pushers
- Add various automatic Prometheus metrics for data uploaders
- Fix URL creation in OAuth flow
- Fix a pool issue in FileUploadQueue
- General OIDC token support
- Add option to authenticate to CDF with AAD
- Add upload queues for sequences and files
- TimeSeriesUploadQueue can now auto-create string time series
- Add option to create missing time series from upload queue
- Fixed an issue in `EitherId` where the `__repr__` method didn't return a string (as would be expected).
- Add py.typed file so mypy knows that package is typed
- Don't require auth for prometheus push gateways as push gateways can be configured to allow unauthorized access.
- Fixed a bug where the state store config would not allow raw state stores without explicit `null` on local
- An `outside_state` method to test if a new proposed state in state stores is covered or not
- A general SIGINT handler
- Several minor additions to configtools: Defaults in StateStoreConfig, optional dataset ID in CogniteConfig, option to have version as int or None
- Add a metrics factory that caches instances
- A concurrency issue with TimeSeriesUploadQueue where uploads could fail if points were added at the very start of the upload call
- Fix documentation build
Release the first stable version. Open source the library under the Apache 2.0 license