Command Line Interface (CLI) Usage
Beyond just initializing a project and building datafiles, Featurevisor CLI can be used for a few more purposes.
Installation
Use npx to initialize a project first:
$ mkdir my-featurevisor-project && cd my-featurevisor-project
$ npx @featurevisor/cli init
If you wish to initialize a specific example available in the monorepo:
$ npx @featurevisor/cli init --example <name>
Next, install the dependencies in the project:
$ npm install
You can access the Featurevisor CLI from inside the project via:
$ npx featurevisor
Learn more in Quick start.
Linting YAMLs
Check if the YAML files have any syntax or structural errors:
$ npx featurevisor lint
Learn more in Linting.
Building datafiles
Generate datafiles (JSON files) per environment and tag combination, as defined in your project configuration:
$ npx featurevisor build
Learn more in Building datafiles.
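Once built, a datafile can be fetched and consumed by the SDK. Below is a minimal TypeScript sketch; the URL and datafile-tag-all.json filename are assumptions based on the default build output layout for a production environment, so adjust them to match your own project configuration and hosting setup:

import { createInstance } from "@featurevisor/sdk";

// Assumed URL and filename; adjust to wherever you host your datafiles
const DATAFILE_URL = "https://cdn.example.com/production/datafile-tag-all.json";

async function initFeaturevisor() {
  const datafile = await fetch(DATAFILE_URL).then((res) => res.json());

  // All evaluations happen locally against this datafile
  return createInstance({ datafile });
}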
Testing
Test your features and segments:
$ npx featurevisor test
Learn more in Testing.
Restore state files
Building datafiles also generates state files, which track the traffic allocation of your features so that bucketing stays consistent between builds.
To restore them to the last known state in Git, run:
$ npx featurevisor restore
Generate static site
Build the site:
$ npx featurevisor site export
Serve the built site (defaults to port 3000):
$ npx featurevisor site serve
Serve it on a specific port:
$ npx featurevisor site serve -p 3000
Learn more in Status site.
Generate code
Generate TypeScript code from your YAMLs:
$ npx featurevisor generate-code --language typescript --out-dir ./src
See the output in the ./src directory.
Learn more in Code generation.
Find duplicate segments
In larger projects, it is possible to end up with multiple segments having the same conditions. This is not a problem per se, but we should be aware of it.
We can find these duplicates early on by running:
$ npx featurevisor find-duplicate-segments
If we want to know the names of the authors who worked on the duplicate segments, we can pass --authors:
$ npx featurevisor find-duplicate-segments --authors
Find usage
Learn where (and if) certain segments and attributes are used.
For each of the find-usage commands below, you can optionally pass --authors to find out who worked on the affected entities.
Segment usage
$ npx featurevisor find-usage --segment=my_segment
Attribute usage
$ npx featurevisor find-usage --attribute=my_attribute
Unused segments
$ npx featurevisor find-usage --unusedSegments
Unused attributes
$ npx featurevisor find-usage --unusedAttributes
Benchmarking
You can measure how fast or slow your SDK evaluations are for particular features.
The --n option is used to specify the number of iterations to run the benchmark for.
Feature
To benchmark evaluating whether a feature itself is enabled or disabled via the SDK's .isEnabled() method against the provided context:
$ npx featurevisor benchmark \
--environment=production \
--feature=my_feature \
--context='{"userId": "123"}' \
--n=1000
Variation
To benchmark evaluating a feature's variation via the SDK's .getVariation() method:
$ npx featurevisor benchmark \
--environment=production \
--feature=my_feature \
--variation \
--context='{"userId": "123"}' \
--n=1000
Variable
To benchmark evaluating a feature's variable via the SDK's .getVariable() method:
$ npx featurevisor benchmark \
--environment=production \
--feature=my_feature \
--variable=my_variable_key \
--context='{"userId": "123"}' \
--n=1000
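Conceptually, the benchmark measures how long repeated SDK evaluations of the same feature take against the same context. Here is a minimal TypeScript sketch of that idea, assuming a datafile wired up as shown earlier; the CLI command remains the authoritative way to benchmark:

import { createInstance } from "@featurevisor/sdk";

declare const datafile: any; // your built datafile; see the build section above

const f = createInstance({ datafile });

const context = { userId: "123" };
const iterations = 1000;

const start = performance.now();
for (let i = 0; i < iterations; i++) {
  // swap in f.getVariation() or f.getVariable() to benchmark those instead
  f.isEnabled("my_feature", context);
}
const elapsedMs = performance.now() - start;

console.log(`${iterations} evaluations in ${elapsedMs.toFixed(2)} ms`);
console.log(`~${(elapsedMs / iterations).toFixed(4)} ms per evaluation`);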
Configuration
To view the project configuration:
$ npx featurevisor config
To print the configuration as JSON:
$ npx featurevisor config --print --pretty
Evaluate
To learn why certain values (like a feature and its variation or variables) are evaluated the way they are against the provided context:
$ npx featurevisor evaluate \
--environment=production \
--feature=my_feature \
--context='{"userId": "123", "country": "nl"}'
This will show you the full evaluation details, helping you debug better in case of any confusion.
It is similar to logging with the debug level in SDKs, but here we are doing it directly from the CLI in our Featurevisor project, without having to involve our application(s).
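For comparison, this is roughly what debug-level logging looks like on the SDK side. A minimal sketch; the logLevel option shown here is an assumption based on recent SDK versions (older versions accept a custom logger instead), so consult the docs for your SDK version:

import { createInstance } from "@featurevisor/sdk";

declare const datafile: any; // your built datafile

const f = createInstance({
  datafile,
  logLevel: "debug", // assumed option; older SDKs take a logger instance instead
});

// each evaluation now logs the reasoning behind the returned value
f.getVariation("my_feature", { userId: "123", country: "nl" });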
If you wish to print the evaluation details as plain JSON, you can pass --print at the end:
$ npx featurevisor evaluate \
--environment=production \
--feature=my_feature \
--context='{"userId": "123", "country": "nl"}' \
--print \
--pretty
The --pretty flag is optional.
To print further logs in a more verbose way, you can pass --verbose:
$ npx featurevisor evaluate \
--environment=production \
--feature=my_feature \
--context='{"userId": "123", "country": "nl"}' \
--verbose
Assess distribution
To check whether the gradual rollout of a feature and the weight distribution of its variations (if any exist) will work as expected in a real-world application with real traffic, we can imitate that against the provided context by running:
$ npx featurevisor assess-distribution \
--environment=production \
--feature=my_feature \
--context='{"country": "nl"}' \
--populateUuid=userId \
--n=1000
The --n option controls the number of iterations to run, and the --populateUuid option is used here to simulate a different user in each iteration.
Further details about all the options:
- --environment: the environment name
- --feature: the feature key
- --context: the common context object in stringified form
- --populateUuid: attribute key that should be populated with a new UUID and merged with the provided context. You can pass multiple attributes in your command: --populateUuid=userId --populateUuid=deviceId
- --n: the number of iterations to run the assessment for. The higher the number, the more accurate the distribution will be.
- --verbose: print the merged context for better debugging
Everything happens locally in memory without modifying any content anywhere. This command exists only to add to our confidence when questions arise about how effective Featurevisor's traffic distribution is.
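For intuition, here is a minimal TypeScript sketch of what such an assessment boils down to, assuming Node.js 19+ for the global crypto.randomUUID() and a datafile set up as before:

import { createInstance } from "@featurevisor/sdk";

declare const datafile: any; // your built datafile for the target environment

const f = createInstance({ datafile });

const baseContext = { country: "nl" };
const iterations = 1000;
const counts: Record<string, number> = {};

for (let i = 0; i < iterations; i++) {
  // mimics --populateUuid=userId: a fresh UUID per iteration simulates a new user
  const context = { ...baseContext, userId: crypto.randomUUID() };

  const variation = f.getVariation("my_feature", context) ?? "disabled";
  counts[variation] = (counts[variation] ?? 0) + 1;
}

// counts now approximates how traffic would be distributed across variations
console.log(counts);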
Info
Shows the count of various entities in the project:
$ npx featurevisor info
Version
Get the current version of the Featurevisor CLI and its relevant packages:
$ npx featurevisor --version
Or do:
$ npx featurevisor -v