# Command Line Interface (CLI) Usage
Beyond initializing a project and building datafiles, the Featurevisor CLI can be used for several other purposes.
## Installation
Use npx to initialize a project first:

```
$ mkdir my-featurevisor-project && cd my-featurevisor-project
$ npx @featurevisor/cli init
```

If you wish to initialize a specific example as available in the monorepo:

```
$ npx @featurevisor/cli init --example=json
```

Then install the dependencies in the project:

```
$ npm install
```

Afterwards, you can access the Featurevisor CLI from inside the project via:

```
$ npx featurevisor
```

Learn more in Quick start.
## Linting
Check if the definition files have any syntax or structural errors:

```
$ npx featurevisor lint
```

Learn more in Linting.
## Building datafiles
Generate datafiles (static JSON files) for each environment and tag combination defined in your project configuration:

```
$ npx featurevisor build
```

Learn more in Building datafiles.
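To sanity-check what a build produces, here is a minimal sketch of consuming a built datafile with the JavaScript SDK. The datafile URL and feature key below are hypothetical, and SDK options may vary between versions:

```ts
import { createInstance } from "@featurevisor/sdk";

// Hypothetical URL where a built datafile is hosted (e.g. your CDN)
const DATAFILE_URL =
  "https://cdn.example.com/datafiles/featurevisor-tag-all.json";

async function main() {
  const datafileContent = await fetch(DATAFILE_URL).then((res) => res.json());

  // Initialize the SDK with the fetched datafile content
  const f = createInstance({ datafile: datafileContent });

  // "my_feature" and the context below are illustrative placeholders
  console.log("enabled:", f.isEnabled("my_feature", { userId: "123" }));
}

main();
```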
## Testing
Test your features and segments:

```
$ npx featurevisor test
```

Learn more in Testing.
## Generate static site
Build the site:

```
$ npx featurevisor site export
```

Serve the built site (defaults to port 3000):

```
$ npx featurevisor site serve
```

Serve it on a specific port:

```
$ npx featurevisor site serve -p 3000
```

Learn more in Status site.
## Generate code
Generate TypeScript code from your feature definitions:

```
$ npx featurevisor generate-code --language typescript --out-dir ./src
```

See the output in the ./src directory.

Learn more in the code generation page.
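The exact shape of the generated code depends on your definitions and the codegen version, so treat the following as a purely hypothetical illustration of why typed wrappers are useful; the imported name and its method are made up, not the actual generated API:

```ts
// Hypothetical: the actual generated API in ./src will differ.
import { MyFeature } from "./src";

// With generated types, feature keys, variable keys, and context
// shapes are checked at compile time instead of failing at runtime.
const enabled: boolean = MyFeature.isEnabled({ userId: "123" });
console.log("my_feature enabled:", enabled);
```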
## Find duplicate segments
In larger projects, it is possible to end up with multiple segments having the same conditions. This is not a problem per se, but we should be aware of it.
We can find these duplicates early on by running:

```
$ npx featurevisor find-duplicate-segments
```

If we want to know the names of authors who worked on the duplicate segments, we can pass --authors:

```
$ npx featurevisor find-duplicate-segments --authors
```
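Conceptually, finding duplicates boils down to grouping segments by their serialized conditions. A minimal TypeScript sketch of the idea (the segment data and shapes below are hypothetical, not Featurevisor's internal implementation):

```ts
// A minimal sketch: group segments by their serialized conditions.
// The data and shapes below are hypothetical, not Featurevisor internals.
type Segment = { key: string; conditions: unknown };

const segments: Segment[] = [
  { key: "netherlands", conditions: [{ attribute: "country", operator: "equals", value: "nl" }] },
  { key: "dutch_users", conditions: [{ attribute: "country", operator: "equals", value: "nl" }] },
];

// Note: JSON.stringify is order-sensitive; a real implementation
// would likely normalize conditions before comparing them.
const byConditions = new Map<string, string[]>();

for (const segment of segments) {
  const serialized = JSON.stringify(segment.conditions);
  const keys = byConditions.get(serialized) ?? [];
  keys.push(segment.key);
  byConditions.set(serialized, keys);
}

for (const keys of byConditions.values()) {
  if (keys.length > 1) {
    console.log("Duplicate segments:", keys.join(", "));
  }
}
```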
## Find usage

Learn where (and whether) certain segments and attributes are used.
For each of the find-usage commands below, you can optionally pass --authors to find who worked on the affected entities.
### Segment usage
```
$ npx featurevisor find-usage --segment=my_segment
```

### Attribute usage
```
$ npx featurevisor find-usage --attribute=my_attribute
```

### Unused segments
```
$ npx featurevisor find-usage --unusedSegments
```

### Unused attributes
```
$ npx featurevisor find-usage --unusedAttributes
```

### Feature usage
```
$ npx featurevisor find-usage --feature=my_feature
```

## Benchmarking
You can measure how fast or slow your SDK evaluations are for particular features.
The --n option is used to specify the number of iterations to run the benchmark for.
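Under the hood, benchmarking amounts to evaluating repeatedly and averaging the elapsed time. A rough TypeScript sketch of the idea (not the CLI's actual implementation; the datafile path and feature key are hypothetical):

```ts
import { readFileSync } from "node:fs";
import { createInstance } from "@featurevisor/sdk";

// Hypothetical path to a datafile produced by `npx featurevisor build`
const datafileContent = JSON.parse(
  readFileSync("./datafiles/featurevisor-tag-all.json", "utf-8"),
);

const f = createInstance({ datafile: datafileContent });

const n = 1000; // number of iterations, like the CLI's --n option
const context = { userId: "123" };

const start = performance.now();
for (let i = 0; i < n; i++) {
  f.isEnabled("my_feature", context); // feature key is a placeholder
}
const elapsed = performance.now() - start;

console.log(
  `total: ${elapsed.toFixed(2)} ms, avg: ${(elapsed / n).toFixed(4)} ms per evaluation`,
);
```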
### Feature
To benchmark whether a feature itself is enabled or disabled, via the SDK's .isEnabled() method, against the provided context:
```
$ npx featurevisor benchmark \
    --environment=production \
    --feature=my_feature \
    --context='{"userId": "123"}' \
    --n=1000
```

### Variation
To benchmark evaluating a feature's variation via the SDK's .getVariation() method:
```
$ npx featurevisor benchmark \
    --environment=production \
    --feature=my_feature \
    --variation \
    --context='{"userId": "123"}' \
    --n=1000
```

### Variable
To benchmark evaluating a feature's variable via the SDK's .getVariable() method:
```
$ npx featurevisor benchmark \
    --environment=production \
    --feature=my_feature \
    --variable=my_variable_key \
    --context='{"userId": "123"}' \
    --n=1000
```

You can optionally pass --schema-version=2 if you are using the new schema v2.
## Configuration
To view the project configuration:
```
$ npx featurevisor config
```

Print the configuration as JSON:

```
$ npx featurevisor config --json --pretty
```

## Evaluate
To learn why a feature (and its variation or variables) evaluates to a certain value against the provided context:
```
$ npx featurevisor evaluate \
    --environment=production \
    --feature=my_feature \
    --context='{"userId": "123", "country": "nl"}'
```

This will show you the full evaluation details, helping you debug better in case of any confusion.
It is similar to debug-level logging in the SDKs, but done directly from the CLI in our Featurevisor project, without having to involve our application(s).
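For reference, a minimal sketch of the SDK-side equivalent with debug-level logging enabled; the exact logger options may differ between SDK versions, and the datafile path and feature key are hypothetical:

```ts
import { readFileSync } from "node:fs";
import { createInstance, createLogger } from "@featurevisor/sdk";

// Hypothetical path to a built datafile
const datafileContent = JSON.parse(
  readFileSync("./datafiles/featurevisor-tag-all.json", "utf-8"),
);

// Enabling debug logs surfaces the same kind of evaluation details
// in your application that the CLI's evaluate command prints
const f = createInstance({
  datafile: datafileContent,
  logger: createLogger({ levels: ["debug", "info", "warn", "error"] }),
});

f.getVariation("my_feature", { userId: "123", country: "nl" });
```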
If you wish to print the evaluation details in plain JSON, you can pass --json at the end:
```
$ npx featurevisor evaluate \
    --environment=production \
    --feature=my_feature \
    --context='{"userId": "123", "country": "nl"}' \
    --json \
    --pretty
```

The --pretty flag is optional.
To print more detailed logs, you can pass --verbose:
```
$ npx featurevisor evaluate \
    --environment=production \
    --feature=my_feature \
    --context='{"userId": "123", "country": "nl"}' \
    --verbose
```

You can optionally pass --schema-version=2 if you are using the new schema v2.
## List
### List features
To list all features in the project:
```
$ npx featurevisor list --features
```

Advanced search options:
| Option | Description |
|---|---|
| --archived=<true or false> | by archived status |
| --description=<pattern> | by description pattern |
| --disabledIn=<environment> | disabled in an environment |
| --enabledIn=<environment> | enabled in an environment |
| --json | print as JSON |
| --keyPattern=<pattern> | by key pattern |
| --tag=<tag> | by tag |
| --variable=<variableKey> | containing specific variable key |
| --variation=<variationValue> | containing specific variation value |
| --with-tests | with test specs |
| --with-variables | with variables |
| --with-variations | with variations |
| --without-tests | without any test specs |
| --without-variables | without any variables |
| --without-variations | without any variations |
### List segments
To list all segments in the project:
```
$ npx featurevisor list --segments
```

Advanced search options:
| Option | Description |
|---|---|
| --archived=<true or false> | by archived status |
| --description=<pattern> | by description pattern |
| --json | print as JSON |
| --keyPattern=<pattern> | by key pattern |
| --pretty | pretty JSON |
| --with-tests | with test specs |
| --without-tests | without any test specs |
### List attributes
To list all attributes in the project:
```
$ npx featurevisor list --attributes
```

Advanced search options:
| Option | Description |
|---|---|
| --archived=<true or false> | by archived status |
| --description=<pattern> | by description pattern |
| --json | print as JSON |
| --keyPattern=<pattern> | by key pattern |
| --pretty | pretty JSON |
### List tests
To list all test specs in the project:
```
$ npx featurevisor list --tests
```

Advanced search options:
| Option | Description |
|---|---|
| --applyMatrix | apply matrix for assertions |
| --assertionPattern=<pattern> | by assertion's description pattern |
| --json | print as JSON |
| --keyPattern=<pattern> | by key pattern of feature or segment being tested |
| --pretty | pretty JSON |
## Assess distribution
To check whether a feature's gradual rollout and the weight distribution of its variations (if any) will behave as expected in a real-world application with real traffic, we can simulate it against the provided context by running:
```
$ npx featurevisor assess-distribution \
    --environment=production \
    --feature=my_feature \
    --context='{"country": "nl"}' \
    --populateUuid=userId \
    --n=1000
```

The --n option controls the number of iterations to run, and the --populateUuid option is used here to simulate a different user in each iteration.
Further details about all the options:

- --environment: the environment name
- --feature: the feature key
- --context: the common context object in stringified form
- --populateUuid: attribute key that should be populated with a new UUID and merged with the provided context
  - You can pass multiple attributes in your command: --populateUuid=userId --populateUuid=deviceId
- --n: the number of iterations to run the assessment for
  - The higher the number, the more accurate the distribution will be
- --verbose: print the merged context for better debugging
Everything happens locally in memory, without modifying any content anywhere. This command exists only to build confidence when questions arise about how effective Featurevisor's traffic distribution is.
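To see the idea behind the simulation, here is a rough TypeScript sketch of what such an assessment does conceptually (not the CLI's actual implementation; the datafile path and feature key are hypothetical):

```ts
import { randomUUID } from "node:crypto";
import { readFileSync } from "node:fs";
import { createInstance } from "@featurevisor/sdk";

// Hypothetical path to a built datafile
const datafileContent = JSON.parse(
  readFileSync("./datafiles/featurevisor-tag-all.json", "utf-8"),
);

const f = createInstance({ datafile: datafileContent });

const n = 1000; // like the CLI's --n option
const counts: Record<string, number> = {};

for (let i = 0; i < n; i++) {
  // A fresh UUID per iteration simulates a different user,
  // like the CLI's --populateUuid=userId option
  const context = { country: "nl", userId: randomUUID() };

  const variation = f.getVariation("my_feature", context);
  const key = String(variation); // null means the feature is disabled
  counts[key] = (counts[key] ?? 0) + 1;
}

console.log(counts); // e.g. { control: 496, treatment: 504 }
```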
## Info
Show counts of the various entities in the project:

```
$ npx featurevisor info
```

## Version
Get the current version number of the Featurevisor CLI and its relevant packages:
```
$ npx featurevisor --version
```

Or:

```
$ npx featurevisor -v
```
