Shortcuts
Commands:
Usage | Description |
---|---|
neuro attach | Attach terminal to a job |
neuro cp | Copy files and directories |
neuro exec | Execute command in a running job |
neuro images | List images |
neuro kill | Kill job(s) |
neuro login | Log into Neuro Platform |
neuro logout | Log out |
neuro logs | Print the logs for a job |
neuro ls | List directory contents |
neuro mkdir | Make directories |
neuro mv | Move or rename files and directories |
neuro port-forward | Forward port(s) of a job |
neuro ps | List all jobs |
neuro pull | Pull an image from platform registry |
neuro push | Push an image to platform registry |
neuro rm | Remove files or directories |
neuro run | Run a job |
neuro save | Save job's state to an image |
neuro share | Shares resource with another user |
neuro status | Display status of a job |
neuro top | Display GPU/CPU/Memory usage |
Attach terminal to a job
neuro attach [OPTIONS] JOB
Attach terminal to a job
Attach local standard input, output, and error streams to a running job.
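A minimal usage sketch (the job name and port numbers below are hypothetical):
# attach to a running job and forward local port 8080 to the job's port 80
$ neuro attach --port-forward 8080:80 my-job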
Name | Description |
---|---|
--help | Show this message and exit. |
--port-forward LOCAL_PORT:REMOTE_PORT | Forward port(s) of a running job to local port(s) (use multiple times for forwarding several ports) |
Copy files and directories
neuro cp [OPTIONS] [SOURCES]... [DESTINATION]
Copy files and directories.
Either SOURCES or DESTINATION should have the storage:// scheme. If the scheme is omitted, the file:// scheme is assumed. Use the /dev/stdin and /dev/stdout file names to copy a file from the terminal and to print the content of a file on the storage to the console.
Any number of --exclude and --include options can be passed. Filters that appear later in the command take precedence over filters that appear earlier. If neither --exclude nor --include options are specified, the default can be changed using the storage.cp-exclude configuration variable documented in "neuro help user-config".
# copy local files into remote storage root
$ neuro cp foo.txt bar/baz.dat storage:
$ neuro cp foo.txt bar/baz.dat -t storage:
# copy local directory `foo` into existing remote directory `bar`
$ neuro cp -r foo -t storage:bar
# copy the content of local directory `foo` into existing remote
# directory `bar`
$ neuro cp -r -T storage:foo storage:bar
# download remote file `foo.txt` into local file `/tmp/foo.txt` with
# explicit file:// scheme set
$ neuro cp storage:foo.txt file:///tmp/foo.txt
$ neuro cp -T storage:foo.txt file:///tmp/foo.txt
$ neuro cp storage:foo.txt file:///tmp
$ neuro cp storage:foo.txt -t file:///tmp
# download other project's remote file into the current directory
$ neuro cp storage:/{project}/foo.txt .
# download only files with extension `.out` into the current directory
$ neuro cp storage:results/*.out .
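A sketch of how filter precedence works, with hypothetical local and remote paths:
# upload directory `results`, skipping raw dumps but keeping summary dumps;
# the later --include takes precedence over the earlier --exclude
$ neuro cp -r --exclude '*.dump' --include 'summary*.dump' results -t storage:results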
Name | Description |
---|---|
--help | Show this message and exit. |
--continue | Continue copying partially-copied files. |
--exclude-from-files FILES | A list of file names that contain patterns for excluding files and directories. Used only for uploading. The default can be changed using the storage.cp-exclude-from-files configuration variable documented in "neuro help user-config" |
--exclude TEXT | Exclude files and directories that match the specified pattern. |
--include TEXT | Don't exclude files and directories that match the specified pattern. |
--glob / --no-glob | Expand glob patterns in SOURCES with explicit scheme. [default: glob] |
-T, --no-target-directory | Treat DESTINATION as a normal file. |
-p, --progress / -P, --no-progress | Show progress, on by default in TTY mode, off otherwise. |
-r, --recursive | Recursive copy, off by default |
-t, --target-directory DIRECTORY | Copy all SOURCES into DIRECTORY. |
-u, --update | Copy only when the SOURCE file is newer than the destination file or when the destination file is missing. |
Execute command in a running job
neuro exec [OPTIONS] JOB -- CMD...
Execute command in a running job.
# Provides a shell to the container:
$ neuro exec my-job -- /bin/bash
# Executes a single command in the container and returns the control:
$ neuro exec --no-tty my-job -- ls -l
Name | Description |
---|---|
--help | Show this message and exit. |
-t, --tty / -T, --no-tty | Allocate a TTY; useful for interactive jobs. On by default if the command is executed from a terminal; non-TTY mode is used if executed from a script. |
List images
neuro images [OPTIONS]
List images.
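For illustration (the regex pattern below is hypothetical):
# list images of the current project in long format
$ neuro images -l
# list images across all projects whose names match a regex
$ neuro images --all-projects -n '^my-.*'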
Name | Description |
---|---|
--help | Show this message and exit. |
--all-orgs | Show images in all orgs. |
--all-projects | Show images in all projects. |
--cluster CLUSTER | Show images on a specified cluster (the current cluster by default). |
-l | List in long format. |
--full-uri | Output full image URI. |
-n, --name PATTERN | Filter out images by name regex. |
--org ORG | Filter out images by org (multiple option, the current org by default). |
--project PROJECT | Filter out images by project (multiple option, the current project by default). |
Kill job(s)
neuro kill [OPTIONS] JOBS...
Kill job(s).
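For illustration (the job names and IDs below are hypothetical):
$ neuro kill my-training-job
$ neuro kill job-id-1 job-id-2 job-id-3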
Name | Description |
---|---|
--help | Show this message and exit. |
Log into Neuro Platform
neuro login [OPTIONS] [URL]
Log into Neuro Platform.
URL is a platform entrypoint URL.
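For illustration (the explicit URL below is a placeholder):
# log in using the default platform entrypoint
$ neuro login
# log in against an explicit entrypoint URL
$ neuro login https://platform.example.com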
Name | Description |
---|---|
--help | Show this message and exit. |
Log out
neuro logout [OPTIONS]
Log out.
Name | Description |
---|---|
--help | Show this message and exit. |
Print the logs for a job
neuro logs [OPTIONS] JOB
Print the logs for a job.
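For illustration (the job name below is hypothetical):
# print all logs of a job
$ neuro logs my-job
# print only the last hour of logs, with timestamps
$ neuro logs --since 1h --timestamps my-job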
Name | Description |
---|---|
--help | Show this message and exit. |
--since DATE_OR_TIMEDELTA | Only return logs after a specific date (inclusive). Use a value in the format '1d2h3m4s' to specify a moment in the past relative to the current time. |
--timestamps | Include timestamps on each line in the log output. |
List directory contents
neuro ls [OPTIONS] [PATHS]...
List directory contents.
By default PATH is equal to the project's directory (storage:).
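For illustration (the storage path below is hypothetical):
# list the project's storage root
$ neuro ls
# long listing with human-readable sizes, sorted by size
$ neuro ls -l -h --sort size storage:data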
Name | Description |
---|---|
--help | Show this message and exit. |
-d, --directory | list directories themselves, not their contents. |
-l | use a long listing format. |
-h, --human-readable | with -l print human readable sizes (e.g., 2K, 540M). |
-a, --all | do not ignore entries starting with . |
--sort [name | size | time] | sort by given field, default is name. |
Make directories
neuro mkdir [OPTIONS] PATHS...
Make directories.
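For illustration (the storage path below is hypothetical):
# create a directory, creating parent directories as needed
$ neuro mkdir -p storage:results/experiment-1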
Name | Description |
---|---|
--help | Show this message and exit. |
-p, --parents | No error if existing, make parent directories as needed |
Move or rename files and directories
neuro mv [OPTIONS] [SOURCES]... [DESTINATION]
Move or rename files and directories.
SOURCES must contain paths to files or directories existing on the storage, and DESTINATION must contain the full path to the target file or directory.
# move and rename remote file
$ neuro mv storage:foo.txt storage:bar/baz.dat
$ neuro mv -T storage:foo.txt storage:bar/baz.dat
# move remote files into existing remote directory
$ neuro mv storage:foo.txt storage:bar/baz.dat storage:dst
$ neuro mv storage:foo.txt storage:bar/baz.dat -t storage:dst
# move the content of remote directory into other existing
# remote directory
$ neuro mv -T storage:foo storage:bar
# move remote file into other project's directory
$ neuro mv storage:foo.txt storage:/{project}/bar.dat
# move remote file from other project's directory
$ neuro mv storage:/{project}/foo.txt storage:bar.dat
Name | Description |
---|---|
--help | Show this message and exit. |
--glob / --no-glob | Expand glob patterns in SOURCES [default: glob] |
-T, --no-target-directory | Treat DESTINATION as a normal file |
-t, --target-directory DIRECTORY | Copy all SOURCES into DIRECTORY |
Forward port(s) of a job
neuro port-forward [OPTIONS] JOB LOCAL_PORT:REMOTE_PORT...
Forward port(s) of a job.
Forwards port(s) of a running job to local port(s).
# Forward local port 2080 to port 80 of job's container.
# You can use http://localhost:2080 in browser to access job's served http
$ neuro job port-forward my-fastai-job 2080:80
# Forward local port 2222 to job's port 22
# Then copy all data from container's folder '/data' to current folder
# (please run second command in other terminal)
$ neuro job port-forward my-job-with-ssh-server 2222:22
$ rsync -avxzhe ssh -p 2222 root@localhost:/data .
# Forward few ports at once
$ neuro job port-forward my-job 2080:80 2222:22 2000:100
Name | Description |
---|---|
--help | Show this message and exit. |
List all jobs
neuro ps [OPTIONS]
List all jobs.
$ neuro ps -a
$ neuro ps -a --owner=user-1 --owner=user-2
$ neuro ps --name my-experiments-v1 -s failed -s succeeded
$ neuro ps --description="my favourite job"
$ neuro ps -s failed -s succeeded -q
$ neuro ps -t tag1 -t tag2
Name | Description |
---|---|
--help | Show this message and exit. |
-a, --all | Show all jobs regardless the status. |
--all-orgs | Show jobs in all orgs. |
--all-projects | Show jobs in all projects. |
--cluster CLUSTER | Show jobs on a specified cluster (the current cluster by default). |
-d, --description DESCRIPTION | Filter out jobs by description (exact match). |
--distinct | Show only the first job if names are the same. |
--format COLUMNS | Output table format, see "neuro help ps-format" for more info about the format specification. The default can be changed using the job.ps-format configuration variable documented in "neuro help user-config" |
--full-uri | Output full image URI. |
-n, --name NAME | Filter out jobs by name. |
--org ORG | Filter out jobs by org name (multiple option, the current org by default). |
-o, --owner TEXT | Filter out jobs by owner (multiple option). Supports ME option to filter by the current user. |
-p, --project PROJECT | Filter out jobs by project name (multiple option, the current project by default). |
--recent-first / --recent-last | Show newer jobs first or last |
--since DATE_OR_TIMEDELTA | Show jobs created after a specific date (inclusive). Use a value in the format '1d2h3m4s' to specify a moment in the past relative to the current time. |
-s, --status [pending | suspended | running | succeeded | failed | cancelled] | Filter out jobs by status (multiple option). |
-t, --tag TAG | Filter out jobs by tag (multiple option) |
--until DATE_OR_TIMEDELTA | Show jobs created before a specific date (inclusive). Use a value in the format '1d2h3m4s' to specify a moment in the past relative to the current time. |
-w, --wide | Do not cut long lines for terminal width. |
Pull an image from platform registry
neuro pull [OPTIONS] REMOTE_IMAGE [LOCAL_IMAGE]
Pull an image from platform registry.
The remote image name must be a URL with the image:// scheme. Image names can contain a tag.
$ neuro pull image:myimage
$ neuro pull image:/other-project/alpine:shared
$ neuro pull image:/project/my-alpine:production alpine:from-registry
Name | Description |
---|---|
--help | Show this message and exit. |
Push an image to platform registry
neuro push [OPTIONS] LOCAL_IMAGE [REMOTE_IMAGE]
Push an image to platform registry.
The remote image name must be a URL with the image:// scheme. Image names can contain a tag. If the tag is not specified, 'latest' is used.
$ neuro push myimage
$ neuro push alpine:latest image:my-alpine:production
$ neuro push alpine image:/other-project/alpine:shared
Name | Description |
---|---|
--help | Show this message and exit. |
Remove files or directories
neuro rm [OPTIONS] PATHS...
Remove files or directories.
$ neuro rm storage:foo/bar
$ neuro rm storage:/{project}/foo/bar
$ neuro rm storage://{cluster}/{project}/foo/bar
$ neuro rm --recursive storage:/{project}/foo/
$ neuro rm storage:foo/**/*.tmp
Name | Description |
---|---|
--help | Show this message and exit. |
--glob / --no-glob | Expand glob patterns in PATHS [default: glob] |
-p, --progress / -P, --no-progress | Show progress, on by default in TTY mode, off otherwise. |
-r, --recursive | remove directories and their contents recursively |
Run a job
neuro run [OPTIONS] IMAGE [-- CMD...]
Run a job
IMAGE is the Docker image name to run in a job. The CMD list will be passed as arguments to the executed job's image.
# Starts a container pytorch/pytorch:latest on a machine with smaller GPU resources
# (see exact values in `neuro config show`) and with two volumes mounted:
# storage:/<home-directory> --> /var/storage/home (in read-write mode),
# storage:/neuromation/public --> /var/storage/neuromation (in read-only mode).
$ neuro run --preset=gpu-small --volume=storage::/var/storage/home:rw \
  --volume=storage:/neuromation/public:/var/storage/neuromation:ro pytorch/pytorch:latest
# Starts a container using the custom image my-ubuntu:latest stored in neuro
# registry, run /script.sh and pass arg1 and arg2 as its arguments:
$ neuro run -s cpu-small --entrypoint=/script.sh image:my-ubuntu:latest -- arg1 arg2
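One more sketch combining the options documented below (the job name, image, variable, and command are hypothetical):
# Start a detached, named job with an environment variable and a 2-hour
# run-time limit, passing a command to the image:
$ neuro run --detach --name train-v1 -e BATCH_SIZE=64 --life-span 2h \
  -s cpu-small image:my-ubuntu:latest -- python train.py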
Name | Description |
---|---|
--help | Show this message and exit. |
--browse | Open a job's URL in a web browser |
--cluster CLUSTER | Run job in a specified cluster |
-d, --description DESC | Optional job description in free format |
--detach | Don't attach to job logs and don't wait for exit code |
--energy-schedule NAME | Run job only within a selected energy schedule. Selected preset should have scheduler enabled. |
--entrypoint TEXT | Executable entrypoint in the container (note that it overwrites ENTRYPOINT and CMD instructions of the docker image) |
-e, --env VAR=VAL | Set environment variable in container. Use multiple options to define more than one variable. See neuro help secrets for information about passing secrets as environment variables. |
--env-file PATH | File with environment variables to pass |
-x, --extshm / -X, --no-extshm | Request extended '/dev/shm' space [default: x] |
--http-auth / --no-http-auth | Enable HTTP authentication for forwarded HTTP port [default: True] |
--http-port PORT | Enable HTTP port forwarding to container [default: 80] |
--life-span TIMEDELTA | Optional job run-time limit in the format '1d2h3m4s' (some parts may be missing). Set '0' to disable. Default value '1d' can be changed in the user config. |
-n, --name NAME | Optional job name |
--org ORG | Run job in a specified org |
--pass-config / --no-pass-config | Upload neuro config to the job [default: no-pass-config] |
--port-forward LOCAL_PORT:REMOTE_PORT | Forward port(s) of a running job to local port(s) (use multiple times for forwarding several ports) |
-s, --preset PRESET | Predefined resource configuration (to see available values, run neuro config show). |
--priority [low | normal | high] | Priority used to specify job's start order. Jobs with higher priority will start before ones with lower priority. Priority should be supported by cluster. |
--privileged | Run job in privileged mode, if it is supported by cluster. |
-p, --project PROJECT | Run job in a specified project. |
--restart [never | on-failure | always] | Restart policy to apply when a job exits [default: never] |
--schedule-timeout TIMEDELTA | Optional job schedule timeout in the format '3m4s' (some parts may be missing). |
--share USER | Share job write permissions to user or role. |
--tag TAG | Optional job tag, multiple values allowed |
-t, --tty / -T, --no-tty | Allocate a TTY; useful for interactive jobs. On by default if the command is executed from a terminal; non-TTY mode is used if executed from a script. |
-v, --volume MOUNT | Mounts directory from vault into container. Use multiple options to mount more than one volume. See neuro help secrets for information about passing secrets as mounted files. |
--wait-for-seat / --no-wait-for-seat | Wait for total running jobs quota [default: no-wait-for-seat] |
--wait-start / --no-wait-start | Wait for a job start or failure [default: wait-start] |
-w, --workdir TEXT | Working directory inside the container |
Save job's state to an image
neuro save [OPTIONS] JOB IMAGE
Save job's state to an image.
$ neuro job save job-id image:ubuntu-patched
$ neuro job save my-favourite-job image:ubuntu-patched:v1
$ neuro job save my-favourite-job image://bob/ubuntu-patched
Name | Description |
---|---|
--help | Show this message and exit. |
Shares resource with another user
neuro share [OPTIONS] URI USER {read|write|manage}
Shares resource with another user.
URI is the shared resource. USER is the username to share the resource with. PERMISSION is the sharing access right: read, write, or manage.
$ neuro acl grant storage:///sample_data/ alice manage
$ neuro acl grant image:resnet50 bob read
$ neuro acl grant job:///my_job_id alice write
Name | Description |
---|---|
--help | Show this message and exit. |
Display status of a job
neuro status [OPTIONS] JOB
Display status of a job.
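For illustration (the job name below is hypothetical):
$ neuro status --full-uri my-job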
Name | Description |
---|---|
--help | Show this message and exit. |
--full-uri | Output full URI. |
Display GPU/CPU/Memory usage
neuro top [OPTIONS] [JOBS]...
Display GPU/CPU/Memory usage.
$ neuro top
$ neuro top job-1 job-2
$ neuro top --owner=user-1 --owner=user-2
$ neuro top --name my-experiments-v1
$ neuro top -t tag1 -t tag2
Name | Description |
---|---|
--help | Show this message and exit. |
--cluster CLUSTER | Show jobs on a specified cluster (the current cluster by default). |
-d, --description DESCRIPTION | Filter out jobs by description (exact match). |
--format COLUMNS | Output table format, see "neuro help top-format" for more info about the format specification. The default can be changed using the job.top-format configuration variable documented in "neuro help user-config" |
--full-uri | Output full image URI. |
-n, --name NAME | Filter out jobs by name. |
-o, --owner TEXT | Filter out jobs by owner (multiple option). Supports ME option to filter by the current user. Specify ALL to show jobs of all users. |
-p, --project PROJECT | Filter out jobs by project name (multiple option). |
--since DATE_OR_TIMEDELTA | Show jobs created after a specific date (inclusive). Use a value in the format '1d2h3m4s' to specify a moment in the past relative to the current time. |
--sort COLUMNS | Sort rows by the specified column. Add a "-" prefix to reverse the sorting order. Multiple columns can be specified (comma-separated). [default: cpu] |
-t, --tag TAG | Filter out jobs by tag (multiple option) |
--timeout FLOAT | Maximum allowed time for executing the command, 0 for no timeout [default: 0] |
--until DATE_OR_TIMEDELTA | Show jobs created before a specific date (inclusive). Use a value in the format '1d2h3m4s' to specify a moment in the past relative to the current time. |