GraphQL
Many Shakudo Platform features are available through the platform dashboard as well as through the GraphQL API. The GraphQL API is particularly useful when actions are easier to perform programmatically, such as spinning up many jobs at once.
Below are some common GraphQL queries and mutations for submitting work, checking status, and more.
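All of the queries and mutations on this page can be sent from any GraphQL client. As a minimal sketch, and only as an assumption about how your cluster is exposed (the endpoint URL, environment variable names, and bearer-token header below are placeholders, not documented values), the Python helper posts a request with the requests library; later snippets on this page reuse this run_graphql helper.

```python
import os

import requests

# Hypothetical endpoint and token: replace with your cluster's GraphQL URL and auth mechanism.
GRAPHQL_URL = os.environ.get("SHAKUDO_GRAPHQL_URL", "https://<your-domain>/graphql")
API_TOKEN = os.environ.get("SHAKUDO_API_TOKEN", "")


def run_graphql(query, variables=None):
    """POST a GraphQL query or mutation and return the `data` payload."""
    response = requests.post(
        GRAPHQL_URL,
        json={"query": query, "variables": variables or {}},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    body = response.json()
    if body.get("errors"):
        raise RuntimeError(body["errors"])
    return body["data"]


# Example: total number of Sessions (a field also used in the queries below).
print(run_graphql("query { countHyperHubSessions }"))
```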
Common Use Cases
Get Sessions
Description
Retrieves a list of Sessions
query hyperhubSessions($limit: Int!, $email: String, $status: String, $imageType: String) {
hyperHubSessions(orderBy:{startTime: desc}, take: $limit, where: {
hyperplaneUserEmail: {equals: $email},
imageType: {equals: $imageType},
status: {equals: $status},
}) {
id
hyperplaneUserEmail
status
imageType
jLabUrl
notebookURI
estimatedCost
resourceCPUlimit
resourceRAMlimit
resourceCPUrequest
resourceRAMrequest
gpuRequest
startTime
}
countHyperHubSessions
}
Sample Variables
{
"limit": 10,
"email": "demo@shakudo.io",
"status": "in progress"
}
Parameters
Field | Type | Description |
---|---|---|
limit | Int! | The maximum number of records to show in the result. (required) |
email | String | Shakudo platform user email for the user who created the session |
imageType | String | Name of the Shakudo platform environment configuration (EC). For example, "basic" |
status | String | Underlying Kubernetes job status |
Response Type
Array of HyperHubSessions
Sample Response
{
"data": {
"hyperHubSessions": [
{
"id": "49475b67-3f8f-43c1-9f42-7b2175d1e679",
"hyperplaneUserEmail": "demo@shakudo.io",
"status": "in progress",
"imageType": "basic",
"jLabUrl": "client.hyperplane.dev/jupyterlabUrl/",
"notebookURI": "ssh demo-pvc-entry@demo.dev",
"estimatedCost": null,
"resourceCPUlimit": null,
"resourceRAMlimit": null,
"resourceCPUrequest": null,
"resourceRAMrequest": null,
"gpuRequest": null,
"startTime": "2023-07-05T16:25:45.676Z"
}
],
"countHyperHubSessions": 22
}
}
Create Session
Description
Creates a Session
Creating using createHyperHubSession parameters
query GetHyperplaneUserId($hyperplaneUserEmail: String!){
hyperplaneUsers(where: {email: {equals: $hyperplaneUserEmail}}) {
id
email
}
}
# billingProjectName optional
query GetBillingProjectId($billingProjectName: String){
billingProjects(where: {name: {equals: $billingProjectName}}) {
id
name
}
}
# userPvcName and displayName optional
query GetUserPvcId($userPvcName: String, $displayName: String){
userPvcs(where: {
pvcName: {equals: $userPvcName},
displayName: {equals: $displayName}
}) {
id
pvcName
displayName
}
}
mutation createSession(
$imageType: String!
$hyperplaneUserId: String!
$hyperplaneUserEmail: String!
$timeout: Int!
$collaborative: Boolean!
$imageHash: String!
$userPvcName: String = ""
$userPvc: UserPvcCreateNestedOneWithoutHyperHubSessionInput
$billingProjectId: String!
) {
createHyperHubSession(
data: {
imageType: $imageType
timeout: $timeout
collaborative: $collaborative
imageHash: $imageHash
group: ""
hyperplaneUser: { connect: { id: $hyperplaneUserId } }
billingProject: { connect: { id: $billingProjectId } }
userPvc: $userPvc
userPvcName: $userPvcName
hyperplaneUserEmail: $hyperplaneUserEmail
}
) {
id
hyperplaneUserEmail
status
imageType
jLabUrl
estimatedCost
resourceCPUlimit
resourceRAMlimit
resourceCPUrequest
resourceRAMrequest
gpuRequest
startTime
completionTime
timeout
group
billingProjectId
podSpec
}
}
Sample Variables
Retrieve $hyperplaneUserId using GetHyperplaneUserId and $billingProjectId using GetBillingProjectId.
Default Drive
{
"collaborative": false,
"imageType": "basic",
"imageHash": "",
"timeout": 900,
"hyperplaneUserEmail": "demo@shakudo.io",
"hyperplaneUserId": "93c6c00a-14b7-4cf7-845d-70d9e779b2cd", # From GetHyperplaneUserId
"billingProjectId": "8359f1f9-2eca-465b-9ac5-7cdb0e97e73f" # From GetBillingProjectId
}
Custom Drive
{
"collaborative": false,
"imageType": "basic",
"imageHash": "",
"timeout": 900,
"userPvcName": "demo-user-pvc-name",
"displayName": "demo drive",
"hyperplaneUserId": "93c6c00a-14b7-4cf7-845d-70d9e779b2cd", # From GetHyperplaneUserId
"billingProjectId": "8359f1f9-2eca-465b-9ac5-7cdb0e97e73f" # From GetBillingProjectId
"userPvc": { "connect": { "id": "bb2eeed2-6032-4036-9e8f-e757235533bb" }}, # From GetUserPvcId
"hyperplaneUserEmail": "demo@shakudo.io"
}
Parameters
Field | Type | Definition |
---|---|---|
imageType | String! (required) | Name of the Shakudo platform environment configuration (EC) |
hyperplaneUserId | String! (required) | Shakudo platform user account ID |
hyperplaneUserEmail | String! (required) | Shakudo platform user account email |
collaborative | Boolean! (required) | Enables collaborative mode. Collaborative mode allows multiple users to work together in the same session environment. |
timeout | Int! (required) | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Default: -1 (never timeout); the dashboard default is 86400 |
imageHash | String! (required) | URL of custom image, "" if using a default image like basic |
userPvcName | String ("" if not provided) | Persistent volume name as found in Kubernetes. Typically includes the drive name found on the dashboard. Default: "" (empty string) which corresponds with default drive claim-{user-email} |
userPvc | UserPvc | Shakudo session persistent volume (drive) details. Can either provide identifiers to connect to an existing drive or can provide values to create a new drive. Default: not present, which corresponds with default drive claim-{user-email}. Note: userPvc ID must correspond with same userPvc as userPvcName. |
displayName | String | Drive (PVC) display name as visible on UI dashboard |
billingProjectId | String! (required) | ID for billing project that user would like Session costs to contribute. Can either provide identifiers to connect to an existing billing project or can provide values to create a new billing project. Can get from GetBillingProjectId. |
billingProjectName | String | Name of billing project as shown on UI dashboard |
Response Type
HyperHubSession
Sample Response
{
"data": {
"createHyperHubSession": {
"id": "0b8b90c7-b3d6-43d7-a34a-27a9b17521b4",
"hyperplaneUserEmail": "demo@shakudo.io",
"status": "pending",
"imageType": "basic",
"jLabUrl": null,
"estimatedCost": null,
"resourceCPUlimit": null,
"resourceRAMlimit": null,
"resourceCPUrequest": null,
"resourceRAMrequest": null,
"gpuRequest": null,
"startTime": "2023-07-06T21:14:29.245Z",
"completionTime": null,
"timeout": 900,
"group": "",
"billingProjectId": "bb2eeed2-6032-4036-9e8f-e757235533bb",
"podSpec": null
}
}
}
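Putting the Default Drive flow together programmatically: the sketch below reuses the hypothetical run_graphql helper from the introduction, looks up the user and billing project IDs with the queries above, and then calls createHyperHubSession with the same variables as the Default Drive example. The selection set is trimmed for brevity.

```python
# Sketch of the Default Drive flow, assuming the run_graphql helper from the introduction.
GET_USER_ID = """
query GetHyperplaneUserId($hyperplaneUserEmail: String!) {
  hyperplaneUsers(where: {email: {equals: $hyperplaneUserEmail}}) { id email }
}
"""

GET_BILLING_PROJECT_ID = """
query GetBillingProjectId($billingProjectName: String) {
  billingProjects(where: {name: {equals: $billingProjectName}}) { id name }
}
"""

CREATE_SESSION = """
mutation createSession(
  $imageType: String!, $hyperplaneUserId: String!, $hyperplaneUserEmail: String!,
  $timeout: Int!, $collaborative: Boolean!, $imageHash: String!,
  $userPvcName: String = "", $billingProjectId: String!
) {
  createHyperHubSession(data: {
    imageType: $imageType, timeout: $timeout, collaborative: $collaborative,
    imageHash: $imageHash, group: "",
    hyperplaneUser: { connect: { id: $hyperplaneUserId } },
    billingProject: { connect: { id: $billingProjectId } },
    userPvcName: $userPvcName, hyperplaneUserEmail: $hyperplaneUserEmail
  }) { id status imageType startTime }
}
"""


def create_basic_session(email, billing_project_name):
    """Look up the IDs required by createSession, then create a 'basic' Session."""
    user = run_graphql(GET_USER_ID, {"hyperplaneUserEmail": email})["hyperplaneUsers"][0]
    project = run_graphql(
        GET_BILLING_PROJECT_ID, {"billingProjectName": billing_project_name}
    )["billingProjects"][0]
    return run_graphql(CREATE_SESSION, {
        "collaborative": False,
        "imageType": "basic",
        "imageHash": "",
        "timeout": 900,
        "hyperplaneUserEmail": email,
        "hyperplaneUserId": user["id"],
        "billingProjectId": project["id"],
    })["createHyperHubSession"]
```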
Creating using PodSpec JSON (getHyperhubSessionDefaultPodSpec)
**Getting PodSpec JSON**
query GetHyperhubSessionPodSpec($imageType: String, $userPvcName: String, $userEmail: String!, $imageUrl: String) {
getHyperhubSessionPodSpec(
imageType: $imageType,
userPvcName: $userPvcName,
userEmail: $userEmail,
imageUrl: $imageUrl
)
}
Sample Variables
{
"imageType": "basic",
"userEmail": "demo@shakudo.io"
}
Parameters
Field | Type | Description |
---|---|---|
imageUrl | String | URL of custom image, same as imageHash |
userPvcName | String | Persistent volume name as found in Kubernetes. Typically includes the drive name found on the dashboard. Default: "" (empty string) which corresponds with default drive claim-{user-email} |
userEmail | String! (required) | Shakudo platform user account email |
imageType | String | Name of Shakudo platform Podspec/Image |
Creating Session with PodSpec JSON
query GetHyperplaneUserId($userEmail: String!){
hyperplaneUsers(where: {email: {equals: $userEmail}}) {
id
email
}
}
mutation CreateSessionWithPodSpecJSON(
$userEmail: String!
$hyperplaneUserId: String!
$userPvcName: String = ""
$podSpec: JSON
) {
createHyperHubSession(
data: {
hyperplaneUserEmail: $userEmail,
hyperplaneUser: { connect: { id: $hyperplaneUserId } },
userPvcName: $userPvcName
podSpec: $podSpec
}
) {
id
hyperplaneUserEmail
status
imageType
jLabUrl
estimatedCost
resourceCPUlimit
resourceRAMlimit
resourceCPUrequest
resourceRAMrequest
gpuRequest
startTime
completionTime
timeout
group
billingProjectId
}
}
Sample Variables
Note: the podSpec variable holds the result of GetHyperhubSessionPodSpec, i.e., the value of the corresponding getHyperHubSessionDefaultPodSpec field in the query's result object. For example:
{
"imageType": "basic",
"userEmail": "demo@shakudo.io",
"hyperplaneUserId": "bb2eeed2-6032-4036-9e8f-e757235533bb",
"podSpec": <getHyperHubSessionDefaultPodSpec result>
}
Parameters
Field | Type | Description |
---|---|---|
userEmail | String! (required) | Shakudo platform user account email |
hyperplaneUserId | String! (required) | Shakudo platform user account ID |
podSpec | JSON | Shakudo platform PodSpec config object as a JSON object, originates from getHyperHubSessionDefaultPodSpec |
userPvcName | String ("" if not provided) | Added as a parameter to align with the corresponding field in the UI |
Response Type
HyperHubSession
Sample Response
{
"data": {
"createHyperHubSession": {
"id": "bb2eeed2-6032-4036-9e8f-e757235533bb",
"hyperplaneUserEmail": "demo@shakudo.io",
"status": "pending",
"imageType": "basic",
"jLabUrl": null,
"estimatedCost": null,
"resourceCPUlimit": null,
"resourceRAMlimit": null,
"resourceCPUrequest": null,
"resourceRAMrequest": null,
"gpuRequest": null,
"startTime": "2023-07-05T16:26:06.346Z",
"completionTime": null,
"timeout": -1,
"group": null,
"billingProjectId": null
}
}
}
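The two calls above chain naturally: fetch the default PodSpec JSON, then pass it straight through as the podSpec variable of the mutation. A minimal sketch, again assuming the run_graphql helper from the introduction:

```python
# Sketch: create a Session from a PodSpec JSON object.
GET_SESSION_POD_SPEC = """
query GetHyperhubSessionPodSpec($imageType: String, $userEmail: String!) {
  getHyperhubSessionPodSpec(imageType: $imageType, userEmail: $userEmail)
}
"""

CREATE_SESSION_WITH_POD_SPEC = """
mutation CreateSessionWithPodSpecJSON(
  $userEmail: String!, $hyperplaneUserId: String!, $userPvcName: String = "", $podSpec: JSON
) {
  createHyperHubSession(data: {
    hyperplaneUserEmail: $userEmail,
    hyperplaneUser: { connect: { id: $hyperplaneUserId } },
    userPvcName: $userPvcName,
    podSpec: $podSpec
  }) { id status imageType startTime }
}
"""


def create_session_from_pod_spec(email, hyperplane_user_id, image_type="basic"):
    """Fetch the default PodSpec for an image type and create a Session with it."""
    pod_spec = run_graphql(
        GET_SESSION_POD_SPEC, {"imageType": image_type, "userEmail": email}
    )["getHyperhubSessionPodSpec"]
    return run_graphql(CREATE_SESSION_WITH_POD_SPEC, {
        "userEmail": email,
        "hyperplaneUserId": hyperplane_user_id,
        "podSpec": pod_spec,
    })["createHyperHubSession"]
```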
Stop Session
mutation stopSession($id: String!) {
updateHyperHubSession(where: {id: $id}, data: {
status: {set: "cancelled"}
}) {
id
status
}
}
Sample Variables
{
"id": "9276a796-229f-4ede-a2cf-a7cf329dab6a"
}
Parameters
Field | Type | Description |
---|---|---|
id | String! (required) | Session ID |
Response Type
HyperHubSession
Sample Response
{
"data": {
"updateHyperHubSession": {
"id": "9276a796-229f-4ede-a2cf-a7cf329dab6a",
"status": "cancelled"
}
}
}
Count Sessions
Description
Count the number of sessions based on the filters provided by the parameters.
query CountHyperhubSessions($email: String, $imageType: String, $status: String) {
countHyperHubSessions(whereOveride: {
hyperplaneUserEmail: {equals: $email},
imageType: {equals: $imageType}
status: {equals: $status}
})
}
Sample Variables
{
"email": "demo@shakudo.io",
"imageType": "basic",
"status": "in progress"
}
Parameters
Field | Type | Description |
---|---|---|
email | String | Shakudo platform user email for the user who created the session |
imageType | String | Name of the Shakudo platform Podspec/Image, e.g., "basic" |
status | String | The status of the Kubernetes job that runs the pipeline job |
Response Type
Int
Sample Response
{
"data": {
"countHyperHubSessions": 1
}
}
Create a Pipeline Job using createPipelineJob Parameters
Description
Creates a Shakudo platform job, which allows users to run task scripts using custom configurations, either immediately as an “Immediate job”, at scheduled intervals as a “Scheduled Job”, or indefinitely as a “Service”.
- Immediate jobs: schedule = "immediate"
- Scheduled jobs: schedule is set to a cron schedule expression, e.g., * * * * * for a job that runs every minute
- Services: timeout and activeTimeout set to -1, schedule = "immediate", and exposedPort != null (set to a valid port)
mutation CreatePipelineJob(
$type: String!
$timeout: Int!
$activeTimeout: Int
$maxRetries: Int!
$yamlPath: String!
$exposedPort: String
$schedule: String
$parameters: ParameterCreateNestedManyWithoutPipelineJobInput
$gitServer: HyperplaneVCServerCreateNestedOneWithoutPipelineJobsInput
$hyperplaneUserEmail: String!
$branchName: String
) {
createPipelineJob(
data: {
jobType: $type
timeout: $timeout
activeTimeout: $activeTimeout
maxRetries: $maxRetries
pipelineYamlPath: $yamlPath
exposedPort: $exposedPort
parameters: $parameters
schedule: $schedule
hyperplaneVCServer: $gitServer
hyperplaneUserEmail: $hyperplaneUserEmail
branchName: $branchName
}
) {
id
jobName
pipelineYamlPath
schedule
status
statusReason
output
startTime
completionTime
daskDashboardUrl
timeout
outputNotebooksPath
activeTimeout
maxRetries
exposedPort
jobType
parameters {
key
value
}
}
}
Parameters
Field | Type | Description |
---|---|---|
type | String! (required) | Name of Shakudo platform Podspec/Image, default or custom. Example: "basic" |
timeout | Int! (required) | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Default: -1 (never timeout). Example: 86400 |
activeTimeout | Int | The maximum time in seconds that the pipeline may run once it is picked up. Default: -1 (never timeout). Example: 86400 |
maxRetries | Int! (required) | The maximum number of attempts to run your pipeline job before returning an error, even if timeouts are not reached. Default: 2 |
yamlPath | String! (required) | The relative path to the .yaml file used to run this pipeline job. Example: "example_notebooks/pipelines/python_hello_world_pipeline/pipeline.yaml" |
exposedPort | String | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. |
schedule | String | Either "immediate" for an immediate job or a cron schedule expression for a scheduled job at the specified interval. |
parameters | ParameterCreateNestedManyWithoutPipelineJobInput | Key-value pairs that can be used within the container environment |
gitServer | HyperplaneVCServerCreateNestedOneWithoutPipelineJobsInput | Git server object, retrievable by searching git servers by name (hyperplaneVCServers) and using resulting id in the following manner: { connect: { id: <gitServerId> } } |
hyperplaneUserEmail | String! (required) | Shakudo platform user email |
branchName | String | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. If commitID is not specified, the latest commit is used. If not specified, default branch is used. |
Please note that the exclamation mark ! indicates that the field is required.
Sample Variables
{
"type": "basic",
"timeout": 86400,
"activeTimeout": 86400,
"maxRetries": 2,
"yamlPath": "examples/pipeline.yaml",
"hyperplaneUserEmail": "demo@shakudo.io",
"branchName": "main"
}
Response Type
PipelineJob
Sample Response
{
"data": {
"createPipelineJob": {
"id": "7b728979-71b7-426c-9847-6fe3e29a6438",
"pipelineYamlPath": "examples/pipeline.yaml",
"schedule": "immediate",
"status": "pending",
"statusReason": null,
"output": null,
"startTime": "2023-06-30T16:03:42.668Z",
"completionTime": null,
"daskDashboardUrl": null,
"timeout": 86400,
"outputNotebooksPath": null,
"activeTimeout": 86400,
"maxRetries": 2,
"exposedPort": null,
"jobType": "basic",
"parameters": []
}
}
}
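This mutation is the building block for the "spin up many jobs at once" use case mentioned in the introduction: loop over a list of parameter sets and submit one immediate job per set. A sketch, assuming the run_graphql helper from the introduction; the nested { create: [...] } shape for parameters is an assumption based on the input type name, and the YAML path and branch are placeholders.

```python
# Sketch: submit many immediate jobs, one per parameter set.
CREATE_PIPELINE_JOB = """
mutation CreatePipelineJob(
  $type: String!, $timeout: Int!, $maxRetries: Int!, $yamlPath: String!,
  $schedule: String, $parameters: ParameterCreateNestedManyWithoutPipelineJobInput,
  $hyperplaneUserEmail: String!, $branchName: String
) {
  createPipelineJob(data: {
    jobType: $type, timeout: $timeout, maxRetries: $maxRetries,
    pipelineYamlPath: $yamlPath, parameters: $parameters, schedule: $schedule,
    hyperplaneUserEmail: $hyperplaneUserEmail, branchName: $branchName
  }) { id status schedule parameters { key value } }
}
"""


def submit_batch(email, param_sets, yaml_path="examples/pipeline.yaml", branch="main"):
    """Submit one immediate job per parameter set; returns the created job IDs."""
    job_ids = []
    for params in param_sets:
        data = run_graphql(CREATE_PIPELINE_JOB, {
            "type": "basic",
            "timeout": 86400,
            "maxRetries": 2,
            "yamlPath": yaml_path,
            "schedule": "immediate",
            "hyperplaneUserEmail": email,
            "branchName": branch,
            # Assumed nested-create shape for the parameters input type.
            "parameters": {"create": [{"key": k, "value": str(v)} for k, v in params.items()]},
        })
        job_ids.append(data["createPipelineJob"]["id"])
    return job_ids


# e.g. submit_batch("demo@shakudo.io", [{"REGION": "us"}, {"REGION": "eu"}])
```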
Create a Scheduled Job using createPipelineJob Parameters
Description
Create a scheduled job by specifying a cron schedule. Use the following guide to create a suitable expression for a specific schedule.
Sample Variables
{
"type": "basic",
"timeout": 86400,
"maxRetries": 2,
"schedule": "* * * * *",
"yamlPath": "examples/pipeline.yaml",
"hyperplaneUserEmail": "demo@shakudo.io",
"branchName": "demo"
}
Response Type
PipelineJob
Sample Response
{
"data": {
"createPipelineJob": {
"id": "9276a796-229f-4ede-a2cf-a7cf329dab6a",
"pipelineYamlPath": "examples/pipeline.yaml",
"schedule": "* * * * *",
"status": "pending",
"statusReason": null,
"output": null,
"startTime": "2023-06-30T16:03:42.668Z",
"completionTime": null,
"daskDashboardUrl": null,
"timeout": 86400,
"outputNotebooksPath": null,
"activeTimeout": 86400,
"maxRetries": 2,
"exposedPort": null,
"jobType": "basic",
"parameters": []
}
}
}
Create a PipelineJob using PodSpec JSON (getPipelineJobPodSpec)
Description
Create an immediate or scheduled job using a PodSpec JSON object that is customizable. Use the following guide to create a suitable expression for a specific schedule.
query GetPipelineJobPodSpec(
$parameters: ParametersInput
$gitServerName: String = ""
$noGitInit: Boolean = false
$imageUrl: String = ""
$userEmail: String!
$noHyperplaneCommands: Boolean = false
$commitId: String = ""
$branchName: String
$pipelineYamlPath: String = ""
$debuggable: Boolean = false
$jobType: String = ""
) {
getPipelineJobPodSpec(
parameters: $parameters
gitServerName: $gitServerName
noGitInit: $noGitInit
imageUrl: $imageUrl
userEmail: $userEmail
noHyperplaneCommands: $noHyperplaneCommands
pipelineYamlPath: $pipelineYamlPath
commitId: $commitId
branchName: $branchName
debuggable: $debuggable
jobType: $jobType
)
}
Sample Variables
{
"jobType": "basic",
"userEmail": "demo@shakudo.io",
"pipelineYamlPath": "examples/pipeline.yaml",
"branchName": "demo"
}
Parameters
Field | Type | Description |
---|---|---|
parameters | ParametersInput | List of key-value parameters that are injected into the Job environment and can be used as environment variables |
gitServerName | String ("" if not provided) | Git Server name, corresponds with name field in HyperplaneVCServer, which is the display name assigned on the dashboard |
noGitInit | Boolean (false if not provided) | False if git server is to be set up using default Shakudo platform workflow. Default: false |
imageUrl | String ("" if not provided) | If the image is custom, then the image URL can be provided |
userEmail | String! (required) | Shakudo platform user account email |
noHyperplaneCommands | Boolean | False if using default Shakudo platform commands on job creation. Required to use Shakudo platform jobs through the pipeline YAML, but not required if the image has its own setup. Default: false |
commitId | String ("" if not provided) | The commit ID with the versions of the pipeline YAML file and pipeline scripts wanted. Ensure that both are present if the commit ID is used. If left empty, assume that the latest commit on the branch is used |
branchName | String | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. If commitID is not specified, the latest commit is used. If not specified, default branch is used. |
pipelineYamlPath | String ("" if not provided) | The relative path to the .yaml file used to run this pipeline job |
debuggable | Boolean (false if not provided) | Whether to enable SSH-based debugging for the job, check the following tutorial for more details |
jobType | String ("" if not provided) | Name of Shakudo platform Podspec/Image, default or custom |
mutation CreatePipelineJob(
$jobName: String
$pipelineYamlPath: String!
$podSpec: JSON!
$schedule: String
$userEmail: String!
) {
createPipelineJob (data: {
jobName: $jobName
pipelineYamlPath: $pipelineYamlPath
podSpec: $podSpec
schedule: $schedule
hyperplaneUserEmail: $userEmail
}
) {
id
jobName
pipelineYamlPath
schedule
status
statusReason
output
startTime
completionTime
daskDashboardUrl
timeout
outputNotebooksPath
activeTimeout
maxRetries
exposedPort
jobType
parameters {
key
value
}
}
}
Field | Type | Definition |
---|---|---|
jobName | String | Plain display name of job viewable from the dashboard, not necessarily unique. |
pipelineYamlPath | String! | The relative path to the .yaml file used to run this pipeline job |
podSpec | JSON! | Shakudo platform PodSpec environment config object as JSON |
schedule | String | Either "immediate" for an immediate job or a cron schedule expression for a scheduled job. |
userEmail | String! | Shakudo user account email |
Sample Variables
podSpec will be the result of the GetPipelineJobPodSpec query, taken from its getPipelineJobPodSpec field.
{
"jobType": "basic",
"userEmail": "demo@shakudo.io",
"branchName": "demo",
"pipelineYamlPath": "examples/pipeline.yaml",
"jobName": "test-create-pipeline-job-with-podSpec",
"podSpec": <GetPipelineJobPodSpec getPipelineJobPodSpec field result>
}
Response Type
PipelineJob
Sample Response
{
"data": {
"createPipelineJob": {
"id": "9f8f0524-6d67-4996-bd45-8a2434d97c1f",
"pipelineYamlPath": "examples/pipeline.yaml",
"schedule": "* * * * *",
"status": "pending",
"statusReason": null,
"output": null,
"startTime": "2023-06-30T16:03:42.668Z",
"completionTime": null,
"daskDashboardUrl": null,
"timeout": 86400,
"outputNotebooksPath": null,
"activeTimeout": 86400,
"maxRetries": 2,
"exposedPort": null,
"jobType": "basic",
"parameters": []
}
}
}
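As with Sessions, the two steps chain directly: fetch the PodSpec JSON, then pass it into createPipelineJob. A sketch, assuming the run_graphql helper from the introduction (optional arguments of GetPipelineJobPodSpec are omitted here and keep their defaults):

```python
# Sketch: create a scheduled job from a PodSpec JSON object.
GET_PIPELINE_JOB_POD_SPEC = """
query GetPipelineJobPodSpec($userEmail: String!, $branchName: String,
                            $pipelineYamlPath: String = "", $jobType: String = "") {
  getPipelineJobPodSpec(userEmail: $userEmail, branchName: $branchName,
                        pipelineYamlPath: $pipelineYamlPath, jobType: $jobType)
}
"""

CREATE_JOB_WITH_POD_SPEC = """
mutation CreatePipelineJob($jobName: String, $pipelineYamlPath: String!,
                           $podSpec: JSON!, $schedule: String, $userEmail: String!) {
  createPipelineJob(data: {
    jobName: $jobName, pipelineYamlPath: $pipelineYamlPath, podSpec: $podSpec,
    schedule: $schedule, hyperplaneUserEmail: $userEmail
  }) { id jobName status schedule jobType }
}
"""

pod_spec = run_graphql(GET_PIPELINE_JOB_POD_SPEC, {
    "jobType": "basic",
    "userEmail": "demo@shakudo.io",
    "branchName": "demo",
    "pipelineYamlPath": "examples/pipeline.yaml",
})["getPipelineJobPodSpec"]

job = run_graphql(CREATE_JOB_WITH_POD_SPEC, {
    "jobName": "test-create-pipeline-job-with-podSpec",
    "pipelineYamlPath": "examples/pipeline.yaml",
    "schedule": "* * * * *",
    "userEmail": "demo@shakudo.io",
    "podSpec": pod_spec,
})["createPipelineJob"]
```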
Cancel a Pipeline Job
Description
Cancel a job (Stop job from running).
Find the PipelineJob by jobName or another non-unique identifier (this step is optional if you already have the job ID):
query ($jobName: String) {
pipelineJobs(where: {jobName: {equals: $jobName} }) {
id
pipelineYamlPath
schedule
status
statusReason
startTime
completionTime
timeout
outputNotebooksPath
activeTimeout
jobType
parameters {
key
value
}
}
}
Sample Variables
{
"jobName": "foo"
}
Parameters
Field | Type | Description |
---|---|---|
jobName | String | Plain display name of job viewable from the dashboard, not necessarily unique. |
Use PipelineJob ID to cancel the job
mutation ($id: String!) {
updatePipelineJob(where: {id: $id},
data: {
status: {set: "cancelled"}
}) {
id
}
}
Sample Variables
{
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
}
Parameters
Field | Type | Description |
---|---|---|
id | String! (required) | Pipeline Job ID |
Response Type
PipelineJob
Sample Response
{
"data": {
"updatePipelineJob": {
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
}
}
}
Get Job Status
query GetPipelineJobStatus($id: String!){
pipelineJob(where: {id: $id }) {
status
statusReason
}
}
Sample Variables
{
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
}
Parameters
Field | Type | Description |
---|---|---|
id | String! (required) | Pipeline Job ID |
Response Type
PipelineJob
Sample Response
{
"data": {
"pipelineJob": {
"status": "done",
"statusReason": null
}
}
}
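Since job creation returns a pending record, callers usually poll this query until the job reaches a terminal state. A sketch, assuming the run_graphql helper from the introduction; the exact set and casing of terminal status strings is an assumption based on the getJobStat counters in the next section (done, failed, cancelled, timed out).

```python
import time

# Sketch: poll a job until it reaches a terminal status,
# assuming the run_graphql helper from the introduction.
GET_JOB_STATUS = """
query GetPipelineJobStatus($id: String!) {
  pipelineJob(where: {id: $id}) { status statusReason }
}
"""

# Assumed terminal statuses, mirroring the getJobStat counters in the next section.
TERMINAL_STATUSES = {"done", "failed", "cancelled", "timed out"}


def wait_for_job(job_id, poll_seconds=30, max_wait_seconds=3600):
    """Return the job's final status record, or raise if it does not finish in time."""
    deadline = time.time() + max_wait_seconds
    while True:
        job = run_graphql(GET_JOB_STATUS, {"id": job_id})["pipelineJob"]
        if job["status"] in TERMINAL_STATUSES:
            return job
        if time.time() >= deadline:
            raise TimeoutError(f"job {job_id} still '{job['status']}' after {max_wait_seconds}s")
        time.sleep(poll_seconds)
```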
Get Job Status Statistics
Description
Count the number of jobs based on their statuses, for example failed, pending, or cancelled jobs. The timeFrame parameter specifies the time window to consider.
For instance:
- T_10M = past 10 minutes
- T_24H = past 24 hours
query {
COUNT_ALL_TOTAL: getJobStat(stat: COUNT_ALL, timeFrame: TOTAL)
COUNT_CANCELLED_TOTAL: getJobStat(stat: COUNT_CANCELLED, timeFrame: TOTAL)
COUNT_DONE_TOTAL: getJobStat(stat: COUNT_DONE, timeFrame: TOTAL)
COUNT_FAILED_TOTAL: getJobStat(stat: COUNT_FAILED, timeFrame: TOTAL)
COUNT_IN_PROGRESS_TOTAL: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: TOTAL
)
COUNT_PENDING_TOTAL: getJobStat(stat: COUNT_PENDING, timeFrame: TOTAL)
COUNT_SCHEDULED_TOTAL: getJobStat(stat: COUNT_SCHEDULED, timeFrame: TOTAL)
COUNT_TIMED_OUT_TOTAL: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: TOTAL)
COUNT_ALL_T_10M: getJobStat(stat: COUNT_ALL, timeFrame: T_10M)
COUNT_CANCELLED_T_10M: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_10M)
COUNT_DONE_T_10M: getJobStat(stat: COUNT_DONE, timeFrame: T_10M)
COUNT_FAILED_T_10M: getJobStat(stat: COUNT_FAILED, timeFrame: T_10M)
COUNT_IN_PROGRESS_T_10M: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: T_10M
)
COUNT_PENDING_T_10M: getJobStat(stat: COUNT_PENDING, timeFrame: T_10M)
COUNT_SCHEDULED_T_10M: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_10M)
COUNT_TIMED_OUT_T_10M: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_10M)
COUNT_ALL_T_1H: getJobStat(stat: COUNT_ALL, timeFrame: T_1H)
COUNT_CANCELLED_T_1H: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_1H)
COUNT_DONE_T_1H: getJobStat(stat: COUNT_DONE, timeFrame: T_1H)
COUNT_FAILED_T_1H: getJobStat(stat: COUNT_FAILED, timeFrame: T_1H)
COUNT_IN_PROGRESS_T_1H: getJobStat(stat: COUNT_IN_PROGRESS, timeFrame: T_1H)
COUNT_PENDING_T_1H: getJobStat(stat: COUNT_PENDING, timeFrame: T_1H)
COUNT_SCHEDULED_T_1H: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_1H)
COUNT_TIMED_OUT_T_1H: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_1H)
COUNT_ALL_T_24H: getJobStat(stat: COUNT_ALL, timeFrame: T_24H)
COUNT_CANCELLED_T_24H: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_24H)
COUNT_DONE_T_24H: getJobStat(stat: COUNT_DONE, timeFrame: T_24H)
COUNT_FAILED_T_24H: getJobStat(stat: COUNT_FAILED, timeFrame: T_24H)
COUNT_IN_PROGRESS_T_24H: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: T_24H
)
COUNT_PENDING_T_24H: getJobStat(stat: COUNT_PENDING, timeFrame: T_24H)
COUNT_SCHEDULED_T_24H: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_24H)
COUNT_TIMED_OUT_T_24H: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_24H)
}
Sample Variables
getJobStat(stat: COUNT_ALL, timeFrame: TOTAL)
Response Type
Int
Sample Response
{
"data": {
"getJobStat": 105179
}
}
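The block of aliases above is mechanical, so it can be generated rather than written by hand. A sketch, assuming the run_graphql helper from the introduction and using only the stat and timeFrame enum values that appear in the query above:

```python
# Sketch: generate one aliased getJobStat field per stat/timeFrame pair.
STATS = [
    "COUNT_ALL", "COUNT_CANCELLED", "COUNT_DONE", "COUNT_FAILED",
    "COUNT_IN_PROGRESS", "COUNT_PENDING", "COUNT_SCHEDULED", "COUNT_TIMED_OUT",
]
TIME_FRAMES = ["TOTAL", "T_10M", "T_1H", "T_24H"]


def job_stats():
    """Return an {alias: count} mapping covering every stat and time frame."""
    fields = "\n".join(
        f"  {stat}_{frame}: getJobStat(stat: {stat}, timeFrame: {frame})"
        for stat in STATS
        for frame in TIME_FRAMES
    )
    return run_graphql("query {\n" + fields + "\n}")
```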
Get Scheduled Jobs Status Statistics
Description
Count the number of scheduled jobs based on their statuses, for example failed, pending, or cancelled jobs. Add status: SCHEDULED to each getJobStat query to isolate *scheduled* jobs.
query {
COUNT_ALL_TOTAL: getJobStat(stat: COUNT_ALL, timeFrame: TOTAL, status: SCHEDULED)
COUNT_CANCELLED_TOTAL: getJobStat(stat: COUNT_CANCELLED, timeFrame: TOTAL, status: SCHEDULED)
COUNT_DONE_TOTAL: getJobStat(stat: COUNT_DONE, timeFrame: TOTAL, status: SCHEDULED)
COUNT_FAILED_TOTAL: getJobStat(stat: COUNT_FAILED, timeFrame: TOTAL, status: SCHEDULED)
COUNT_IN_PROGRESS_TOTAL: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: TOTAL
status: SCHEDULED
)
COUNT_PENDING_TOTAL: getJobStat(stat: COUNT_PENDING, timeFrame: TOTAL, status: SCHEDULED)
COUNT_SCHEDULED_TOTAL: getJobStat(stat: COUNT_SCHEDULED, timeFrame: TOTAL, status: SCHEDULED)
COUNT_TIMED_OUT_TOTAL: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: TOTAL, status: SCHEDULED)
COUNT_ALL_T_10M: getJobStat(stat: COUNT_ALL, timeFrame: T_10M, status: SCHEDULED)
COUNT_CANCELLED_T_10M: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_10M, status: SCHEDULED)
COUNT_DONE_T_10M: getJobStat(stat: COUNT_DONE, timeFrame: T_10M, status: SCHEDULED)
COUNT_FAILED_T_10M: getJobStat(stat: COUNT_FAILED, timeFrame: T_10M, status: SCHEDULED)
COUNT_IN_PROGRESS_T_10M: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: T_10M
status: SCHEDULED
)
COUNT_PENDING_T_10M: getJobStat(stat: COUNT_PENDING, timeFrame: T_10M, status: SCHEDULED)
COUNT_SCHEDULED_T_10M: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_10M, status: SCHEDULED)
COUNT_TIMED_OUT_T_10M: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_10M, status: SCHEDULED)
COUNT_ALL_T_1H: getJobStat(stat: COUNT_ALL, timeFrame: T_1H, status: SCHEDULED)
COUNT_CANCELLED_T_1H: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_1H, status: SCHEDULED)
COUNT_DONE_T_1H: getJobStat(stat: COUNT_DONE, timeFrame: T_1H, status: SCHEDULED)
COUNT_FAILED_T_1H: getJobStat(stat: COUNT_FAILED, timeFrame: T_1H, status: SCHEDULED)
COUNT_IN_PROGRESS_T_1H: getJobStat(stat: COUNT_IN_PROGRESS, timeFrame: T_1H, status: SCHEDULED)
COUNT_PENDING_T_1H: getJobStat(stat: COUNT_PENDING, timeFrame: T_1H, status: SCHEDULED)
COUNT_SCHEDULED_T_1H: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_1H, status: SCHEDULED)
COUNT_TIMED_OUT_T_1H: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_1H, status: SCHEDULED)
COUNT_ALL_T_24H: getJobStat(stat: COUNT_ALL, timeFrame: T_24H, status: SCHEDULED)
COUNT_CANCELLED_T_24H: getJobStat(stat: COUNT_CANCELLED, timeFrame: T_24H, status: SCHEDULED)
COUNT_DONE_T_24H: getJobStat(stat: COUNT_DONE, timeFrame: T_24H, status: SCHEDULED)
COUNT_FAILED_T_24H: getJobStat(stat: COUNT_FAILED, timeFrame: T_24H, status: SCHEDULED)
COUNT_IN_PROGRESS_T_24H: getJobStat(
stat: COUNT_IN_PROGRESS
timeFrame: T_24H
status: SCHEDULED
)
COUNT_PENDING_T_24H: getJobStat(stat: COUNT_PENDING, timeFrame: T_24H, status: SCHEDULED)
COUNT_SCHEDULED_T_24H: getJobStat(stat: COUNT_SCHEDULED, timeFrame: T_24H, status: SCHEDULED)
COUNT_TIMED_OUT_T_24H: getJobStat(stat: COUNT_TIMED_OUT, timeFrame: T_24H, status: SCHEDULED)
}
Sample Variables
getJobStat(stat: COUNT_ALL, timeFrame: TOTAL, status: SCHEDULED)
Response Type
Int
Sample Response
{
"data": {
"getJobStat": 179
}
}
Create a Service
Description
Services are currently pipeline jobs which have an activeTimeout and timeout of -1 (i.e., never-ending jobs), schedule = "immediate", and exposedPort != null.
mutation CreateService(
$type: String!
$maxRetries: Int!
$yamlPath: String!
$jobName: String! = ""
$exposedPort: String = "8787"
$parameters: ParameterCreateNestedManyWithoutPipelineJobInput
$gitServer: HyperplaneVCServerCreateNestedOneWithoutPipelineJobsInput
$hyperplaneUserEmail: String!
$branchName: String
) {
createPipelineJob(
data: {
jobType: $type,
jobName: $jobName,
maxRetries: $maxRetries,
pipelineYamlPath: $yamlPath,
parameters: $parameters,
hyperplaneVCServer: $gitServer,
hyperplaneUserEmail: $hyperplaneUserEmail,
branchName: $branchName,
exposedPort: $exposedPort,
timeout: -1,
activeTimeout: -1,
schedule: "immediate"
}
) {
id
pipelineYamlPath
schedule
status
statusReason
output
startTime
completionTime
daskDashboardUrl
timeout
outputNotebooksPath
activeTimeout
maxRetries
exposedPort
jobType
parameters {
key
value
}
}
}
Sample Variables
{
"type": "basic",
"maxRetries": 2,
"jobName": "test",
"yamlPath": "examples/pipeline.yaml",
"hyperplaneUserEmail": "demo@shakudo.io",
"branchName": "demo"
}
Parameters
Field | Type | Definition |
---|---|---|
type | String! | Name of Shakudo platform Podspec/Image, default or custom. Example: "basic" |
timeout | Int! | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Set to -1 for Services. |
activeTimeout | Int | The maximum time in seconds that the pipeline may run once it is picked up. Set to -1 for Services. |
maxRetries | Int! | The maximum number of attempts to run your pipeline job before returning an error, even if timeouts are not reached. Default: 2 |
yamlPath | String | The relative path to the .yaml file used to run this pipeline job. Example: "example_notebooks/pipelines/python_hello_world_pipeline/pipeline.yaml" |
exposedPort | String | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. |
schedule | String | Set to immediate for Services |
parameters | ParameterCreateNestedManyWithoutPipelineJobInput | Key-value pairs that can be used within the container environment |
gitServer | HyperplaneVCServerCreateNestedOneWithoutPipelineJobsInput | Git server object, retrievable by searching git servers by name (hyperplaneVCServers) and using resulting id in the following manner: { connect: { id: $gitServerId } } |
hyperplaneUserEmail | String! | Shakudo platform user email |
branchName | String | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. If commitID is not specified, the latest commit is used. If not specified, default branch is used. |
Response Type
PipelineJob
Sample Response
{
"data": {
"createPipelineJob": {
"id": "9276a796-229f-4ede-a2cf-a7cf329dab6a",
"pipelineYamlPath": "examples/pipeline.yaml",
"schedule": "immediate",
"status": "pending",
"statusReason": null,
"output": null,
"startTime": "2023-07-06T14:51:35.506Z",
"completionTime": null,
"daskDashboardUrl": null,
"timeout": -1,
"outputNotebooksPath": null,
"activeTimeout": -1,
"maxRetries": 2,
"exposedPort": "8787",
"jobType": "basic",
"parameters": []
}
}
}
Create a Service using PodSpec JSON (getUserServicePodSpec)
Description
Create a Service using a PodSpec JSON object that is customizable.
Retrieve UserServicePodSpec
query GetUserServicePodSpec(
$exposedPort: String
$parameters: ParametersInput
$gitServerName: String
$noGitInit: Boolean
$imageUrl: String
$userEmail: String!
$noHyperplaneCommands: Boolean
$commitId: String
$branchName: String!
$pipelineYamlPath: String!
$jobType: String!
) {
getUserServicePodSpec(
exposedPort: $exposedPort
parameters: $parameters
gitServerName: $gitServerName
noGitInit: $noGitInit
imageUrl: $imageUrl
userEmail: $userEmail
noHyperplaneCommands: $noHyperplaneCommands
commitId: $commitId
branchName: $branchName
pipelineYamlPath: $pipelineYamlPath
jobType: $jobType
)
}
Sample Variables
{
"userEmail": "demo@shakudo.io",
"branchName": "demo",
"pipelineYamlPath": "examples/pipeline.yaml",
"jobType": "basic"
}
Parameters
Field | Type | Description |
---|---|---|
exposedPort | String | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. |
parameters | ParametersInput | Key-value pairs that can be used within the container environment. |
gitServerName | String | The name of the Git server used for the pipeline job. |
noGitInit | Boolean | Specifies whether the Git server initialization is skipped for the pipeline job. |
imageUrl | String | The URL of the image used for the pipeline job. |
userEmail | String! | Shakudo platform user email for the user who created the session. |
noHyperplaneCommands | Boolean | Specifies whether default Shakudo platform commands are used for the pipeline job creation. |
commitId | String | The commit hash for the specific commit used to pull the latest files for the pipeline. |
branchName | String! | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. |
pipelineYamlPath | String! | The relative path to the .yaml file used to run this pipeline job. |
jobType | String! | Name of the Shakudo platform Podspec/Image used for the pipeline job. eg. "basic" |
Create Service using UserServicePodSpec result
mutation CreateService(
$podSpec: JSON!
$jobName: String! = ""
$userEmail: String!
$exposedPort: String! = "8787"
$pipelineYamlPath: String!
) {
createPipelineJob (data: {
jobName: $jobName,
podSpec: $podSpec,
hyperplaneUserEmail: $userEmail,
pipelineYamlPath: $pipelineYamlPath,
exposedPort: $exposedPort,
timeout: -1,
activeTimeout: -1,
schedule: "immediate"
}
) {
id
pipelineYamlPath
schedule
status
statusReason
output
startTime
completionTime
daskDashboardUrl
timeout
outputNotebooksPath
activeTimeout
maxRetries
exposedPort
jobType
parameters {
key
value
}
}
}
Sample Variables
{
"userEmail": "demo@shakudo.io",
"exposedPort": "8787",
"branchName": "main",
"pipelineYamlPath": "examples/pipeline.yaml",
"jobType": "basic",
"jobName": "test-service",
"podSpec": <getUserServicePodSpec field result>
}
Parameters
Field | Type | Description |
---|---|---|
podSpec | JSON! | The JSON object representing the PodSpec configuration for the pipeline job. |
jobName | String! | The name of the pipeline job. |
userEmail | String! | Shakudo platform user email for the user who created the session. |
exposedPort | String! | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. Default value: "8787". |
pipelineYamlPath | String! | The relative path to the .yaml file used to run this pipeline job. |
Response Type
PipelineJob
Sample Response
{
"data": {
"createPipelineJob": {
"id": "9f8f0524-6d67-4996-bd45-8a2434d97c1f",
"pipelineYamlPath": "examples/pipeline.yaml",
"schedule": "immediate",
"status": "pending",
"statusReason": null,
"output": null,
"startTime": "2023-06-30T16:03:42.668Z",
"completionTime": null,
"daskDashboardUrl": null,
"timeout": -1,
"outputNotebooksPath": null,
"activeTimeout": -1,
"maxRetries": 2,
"exposedPort": "8787",
"jobType": "basic",
"parameters": []
}
}
}
Get a List of Services
Description
Get a list of Services. Services are pipeline jobs which have an activeTimeout and timeout of -1 (i.e., never-ending jobs), schedule = "immediate", and exposedPort != null.
query services($offset: Int, $limit: Int!, $status: String!) {
pipelineJobs(orderBy: [{pinned: desc},{ startTime: desc}], take: $limit, skip: $offset, where: {
AND: [
{activeTimeout: {equals: -1}},
{timeout: {equals: -1}},
{timeout: {equals: "immediate"}},
{status: {equals: $status}}
]
}) {
id
exposedPort
pinned
pipelineYamlPath
schedule
status
statusReason
startTime
completionTime
daskDashboardUrl
timeout
output
outputNotebooksPath
activeTimeout
duration
jobType
schedule
estimatedCost
owner
maxRetries
}
}
Sample Variables
{
"limit": 10,
"status": "in progress"
}
Parameters
Field | Type | Description |
---|---|---|
offset | Int | The number of records to skip from the original result. |
limit | Int! (required) | The number of records to retrieve. |
status | String! (required) | The status of the Kubernetes job that runs the pipeline job. |
Response Type
PipelineJob
Sample Response
{
"data": {
"pipelineJobs": [
{
"id": "9276a796-229f-4ede-a2cf-a7cf329dab6a",
"exposedPort": "8787",
"pinned": false,
"pipelineYamlPath": "service.yaml",
"schedule": "immediate",
"status": "in progress",
"statusReason": null,
"startTime": "2023-03-23T02:34:51.850Z",
"completionTime": null,
"daskDashboardUrl": "client.hyperplane.dev/dashboard/",
"timeout": -1,
"output": null,
"outputNotebooksPath": null,
"activeTimeout": -1,
"duration": null,
"jobType": "basic",
"estimatedCost": null,
"owner": "demo",
"maxRetries": 0
},
{
"id": "abeee208-c717-42d9-81f9-9448cdf1473e",
"exposedPort": "8787",
"pinned": false,
"pipelineYamlPath": "service2.yaml",
"schedule": "immediate",
"status": "in progress",
"statusReason": null,
"startTime": "2022-11-18T19:21:23.504Z",
"completionTime": null,
"daskDashboardUrl": "client.hyperplane.dev/dashboard2/",
"timeout": -1,
"output": null,
"outputNotebooksPath": "gs://outputNotebookPath",
"activeTimeout": -1,
"duration": null,
"jobType": "basic",
"estimatedCost": null,
"owner": "demo",
"maxRetries": 0
}
]
}
}
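For clusters with many Services, the offset and limit arguments support simple offset-based pagination. A sketch, assuming the run_graphql helper from the introduction:

```python
# Sketch: page through Services until an empty page is returned.
LIST_SERVICES = """
query services($offset: Int, $limit: Int!, $status: String!) {
  pipelineJobs(orderBy: [{pinned: desc}, {startTime: desc}], take: $limit, skip: $offset, where: {
    AND: [
      {activeTimeout: {equals: -1}},
      {timeout: {equals: -1}},
      {schedule: {equals: "immediate"}},
      {status: {equals: $status}}
    ]
  }) { id jobName exposedPort status startTime }
}
"""


def iter_services(status="in progress", page_size=10):
    """Yield every Service matching the status filter, one page at a time."""
    offset = 0
    while True:
        page = run_graphql(
            LIST_SERVICES, {"offset": offset, "limit": page_size, "status": status}
        )["pipelineJobs"]
        if not page:
            return
        yield from page
        offset += page_size
```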
Cancel all Scheduled Jobs
Cancel all Scheduled Jobs
mutation cancelScheduledJobs {
updateManyPipelineJob(
where: { status: { equals: "scheduled" } }
data: { status: { set: "cancelled" } }
) {
count
}
}
Cancel all Scheduled Jobs for a Specific User
Users can also add hyperplaneUserEmail: { equals: $userEmail }
to cancel all scheduled jobs created by a particular user.
mutation cancelScheduledJobsForUser($userEmail: String!) {
updateManyPipelineJob(
where: { status: { equals: "scheduled" }, hyperplaneUserEmail: { equals: $userEmail } }
data: { status: { set: "cancelled" } }
) {
count
}
}
Sample Variables
{
"userEmail": "demo@shakudo.io"
}
Parameters
Field | Type | Description |
---|---|---|
userEmail | String! (required) | The email corresponding to the user who created all the scheduled jobs to be cancelled. |
Response Type
AffectedRowsOutput
Sample Response
{
"data": {
"updateManyPipelineJob": {
"count": 2
}
}
}
Get Job Parameters
Description
Get the list of parameters for a pipeline job
query jobParameters($id: String!) {
pipelineJobs(where: {id: {equals: $id}}) {
parameters {
key
value
id
pipelineJobId
}
}
}
Parameters
Field | Type | Description |
---|---|---|
id | String! (required) | The ID of the job for which the parameters are being listed. |
Sample Variables
{
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
}
Response Object Fields
Array of Parameters
Sample Response
{
"data": {
"pipelineJobs": [
{
"parameters": [
{
"key": "key",
"value": "value",
"id": "abeee208-c717-42d9-81f9-9448cdf1473e",
"pipelineJobId": "d1e5cd20-05d3-4517-b009-ec2e8e4f171d"
}
]
}
]
}
}
Delete a Job Parameter
Description
Delete a parameter for a pipeline job
# Retrieve parameterId
query GetPipelineJobParameters($jobId: String!){
pipelineJob(where:{id: $jobId}){
jobName
parameters{
id
key
value
}
}
}
mutation DeletePipelineJobParameter($parameterId: String!) {
deleteParameter(where: {
id: $parameterId
}) {
id
key
value
}
}
Parameters
Field | Type | Description |
---|---|---|
jobId | String! (required) | The ID of the job from which the parameter is being deleted. |
parameterId | String! (required) | The ID of the parameter being deleted. Retrieved from GetPipelineJobParameters |
Sample Variables
{
"jobId": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"parameterId": "9f8f0524-6d67-4996-bd45-8a2434d97c1f"
}
Response Type
Parameter
Sample Response
{
"data": {
"deleteParameter": {
"id": "9f8f0524-6d67-4996-bd45-8a2434d97c1f",
"key": "foo",
"value": "bar"
}
}
}
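Because deleteParameter takes a parameter ID, a common pattern is to look the parameter up by key first and then delete it. A sketch, assuming the run_graphql helper from the introduction:

```python
# Sketch: delete every parameter on a job whose key matches.
GET_JOB_PARAMETERS = """
query GetPipelineJobParameters($jobId: String!) {
  pipelineJob(where: {id: $jobId}) { jobName parameters { id key value } }
}
"""

DELETE_PARAMETER = """
mutation DeletePipelineJobParameter($parameterId: String!) {
  deleteParameter(where: {id: $parameterId}) { id key value }
}
"""


def delete_parameters_by_key(job_id, key):
    """Return the deleted parameter records."""
    params = run_graphql(GET_JOB_PARAMETERS, {"jobId": job_id})["pipelineJob"]["parameters"]
    return [
        run_graphql(DELETE_PARAMETER, {"parameterId": p["id"]})["deleteParameter"]
        for p in params
        if p["key"] == key
    ]
```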
Update a Job Parameter
Description
Updates the key and/or value of a parameter.
mutation ($parameterId: String!, $keyValue: String, $valueValue: String) {
updateParameter(where: {id: $parameterId}, data: {
key: {set: $keyValue}
value: {set: $valueValue}
}) {
id
key
value
}
}
Parameters
Field | Type | Description |
---|---|---|
parameterId | String! (required) | ID of the parameter being updated. |
keyValue | String | New value for the "key" field of the parameter. |
valueValue | String | New value for the "value" field of the parameter. |
Sample Variables
{
"parameterId": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"keyValue": "newKey",
"valueValue": "newValue"
}
Response Type
Parameter
Sample Response
{
"data": {
"updateParameter": {
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
"keyValue": "newKey",
"valueValue": "newValue"
}
}
}
Types
HyperHubSession
Metadata for Sessions
billingProject: BillingProject
billingProjectId: String
collaborative: Boolean
completionTime: DateTime
currentPodEvents: String
department: String
duration: Int
estimatedCost: Float
gpuRequest: String
group: String
hyperplaneUser: HyperplaneUser!
hyperplaneUserEmail: String!
hyperplanepodspecName: String
id: String
imageHash: String
imageType: String
jLabUrl: String
notebookURI: String
owner: String
podEventsLog: String
podSpec: String
podSpecTemplate: HyperplanePodSpec
premptableNode: Boolean
resourceCPUlimit: String
resourceCPUrequest: String
resourceRAMlimit: String
resourceRAMrequest: String
runId: String
sshCommand: String
startTime: DateTime
status: String
statusReason: String
timeout: Int
useHyperplanepodspec: Boolean
userPvc: UserPvc
userPvcName: String
workerPodName: String
Field | Type | Definition |
---|---|---|
billingProject | BillingProject | Billing project that user would like Session costs to contribute. Can either provide identifiers to connect to an existing billing project or can provide values to create a new billing project. |
billingProjectId | String | Billing Project ID. |
collaborative | Boolean | Toggle collaborative mode |
completionTime | DateTime | Completion time of the pipeline job |
currentPodEvents | String | Displays log of states of pod (current events in pod) |
department | String | Disabled, not used |
duration | Int | Duration of the pipeline job |
estimatedCost | Float | Disabled, plan on using it for tracking estimated cost of the job |
gpuRequest | String | Number of gpus requested |
group | String | Not used, leave as an empty string |
hyperplaneUser | HyperplaneUser! | Shakudo platform user account details. Can either provide identifiers to connect to an existing account or can provide values to create a new user account. |
hyperplaneUserEmail | String! | Shakudo platform user account email |
hyperplanepodspecName | String | Disabled |
id | String | HyperHubSession object identifier |
imageHash | String | URL of custom image, same as imageUrl |
imageType | String | Name of Shakudo platform Podspec/Image |
jLabUrl | String | URL for JupyterLab version of the Session environment |
notebookURI | String | Base url to access jupyter notebook and vscode notebooks |
owner | String | Username of the user account that owns the session, currently the user that created the session |
podEventsLog | String | Session pod event status log details |
podSpec | Json? | Shakudo platform PodSpec environment config object as JSON |
podSpecTemplate | HyperplanePodSpec | Not used, similar use to jobType |
premptableNode | Boolean | Disabled |
resourceCPUlimit | String | Limit to the number of CPUs to be allocated |
resourceCPUrequest | String | Number of CPUs requested to be allocated |
resourceRAMlimit | String | Memory allocation limit |
resourceRAMrequest | String | Memory allocation amount request |
runId | String | |
sshCommand | String | |
startTime | DateTime | Session environment creation time |
status | String | The status of the Kubernetes job that runs the pipeline job |
statusReason | String | Kubernetes job status details |
timeout | Int | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Default: -1, i.e., never timeout; 86400 on the dashboard |
useHyperplanepodspec | Boolean | |
userPvc | UserPvc | Shakudo session persistent volume (drive) details. Can either provide identifiers to connect to an existing drive or can provide values to create a new drive. Default: not present, which corresponds with default drive claim-{user-email}. |
userPvcName | String | Persistent volume name as found in Kubernetes. Typically includes the drive name found on the dashboard. Default: an empty string, which corresponds with the default drive claim-{user-email}. |
workerPodName | String | |
Example:
{
"billingProject": null,
"collaborative": false,
"completionTime": "2023-05-17T21:26:02.310Z",
"currentPodEvents": null,
"department": null,
"duration": 901,
"estimatedCost": null,
"gpuRequest": null,
"group": "",
"hyperplaneUser": {
"id": "3661bd41-6ca9-4b20-a74b-c46ce6ff6951",
"email": "demo@shakudo.io"
},
"hyperplaneUserEmail": "demo@shakudo.io",
"hyperplanepodspecName": null,
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"imageHash": "gcr.io/imageHash",
"imageType": "test-custom-image",
"jLabUrl": "",
"notebookURI": "",
"owner": "demo",
"podEventsLog": "Stopping container hyperhub-user",
"podSpec": null,
"podSpecTemplate": null,
"premptableNode": false,
"resourceCPUlimit": null,
"resourceCPUrequest": null,
"resourceRAMlimit": null,
"resourceRAMrequest": null,
"runId": null,
"sshCommand": null,
"startTime": "2023-05-17T21:11:00.948Z",
"status": "cancelled",
"statusReason": "Ready--true",
"timeout": 900,
"useHyperplanepodspec": false,
"userPvc": null,
"userPvcName": "",
"workerPodName": null
}
PipelineJob
Shakudo platform job config docs
type PipelineJob {
TritonClient: TritonClient
activeTimeout: Int!
billingProject: BillingProject
billingProjectId: String
branchName: String
branchNameOrCommit: BranchSelection
childJobs(cursor: PipelineJobWhereUniqueInput, distinct: [PipelineJobScalarFieldEnum!], orderBy: [PipelineJobOrderByInput!], skip: Int, take: Int, where: PipelineJobWhereInput): [PipelineJob!]!
cloudRunner: String
commitId: String
completionTime: DateTime
customCommand: String
customTrigger: String
dashboardPrefix: String
daskDashboardUrl: String
debuggable: Boolean!
department: String
displayedOwner: String!
duration: Int
estimatedCost: Float
exposedPort: String
grafanaLink: String!
group: String
hyperplaneUser: HyperplaneUser
hyperplaneUserEmail: String
hyperplaneUserId: String
hyperplaneVCServer: HyperplaneVCServer
hyperplaneVCServerId: String
hyperplanepodspecName: String
icon: String
id: String!
imageHash: String
jobCommand: String
jobName: String
jobType: String!
mappedUrl: String
maxHpaRange: Int!
maxRetries: Int!
maxRetriesPerStep: Int!
minReplicas: Int!
noGitInit: Boolean
noHyperplaneCommands: Boolean
noVSRewrite: Boolean
output: String
outputNotebooksPath: String
owner: String
parameters(cursor: ParameterWhereUniqueInput, distinct: [ParameterScalarFieldEnum!], orderBy: [ParameterOrderByInput!], skip: Int, take: Int, where: ParameterWhereInput): [Parameter!]!
parentJob: PipelineJob
parentJobId: String
pinned: Boolean!
pipelineYamlPath: String
podSpecTemplate: HyperplanePodSpec
podSpecTemplateId: String
preemptible: Boolean!
premptableNode: Boolean!
priorityClass: String!
runId: String
schedule: String!
sendNotification: Boolean
slackChannelName: String
sshCommand: String
startTime: DateTime!
status: String!
statusReason: String
steps(cursor: PipelineStepWhereUniqueInput, distinct: [PipelineStepScalarFieldEnum!], orderBy: [PipelineStepOrderByInput!], skip: Int, take: Int, where: PipelineStepWhereInput): [PipelineStep!]!
timeout: Int!
timeoutPerStep: Int
timezone: String!
useHyperplanepodspec: Boolean
workerPodName: String
}
Field | Type | Definition |
---|---|---|
TritonClient | TritonClient | If TritonClient is non-null, then this PipelineJob is a Triton Job. TritonClient stores Triton client instance object metadata. |
activeTimeout | Int! | The maximum time in seconds that the pipeline may run once it is picked up. Default: 86400, use -1 to never timeout. |
billingProject | BillingProject | Billing project that user would like Job costs to contribute. Can either provide identifiers to connect to an existing billing project or can provide values to create a new billing project. |
billingProjectId | String | Billing Project ID. |
branchName | String | Name of git branch. |
branchNameOrCommit | BranchSelection | Enum that states whether the image is based on branch or commit. |
childJobs | [PipelineJob!]! | Jobs that spawned based on this job. |
cloudRunner | String | Currently disabled, but will be used to determine which cloud the job will run on. Will be added as part of the multicloud feature. |
commitId | String | Commit hash for the commit used to pull the latest files for the pipeline. |
completionTime | DateTime | Completion time of the pipeline job. |
customCommand | String | Not used |
customTrigger | String | Currently disabled on cluster, but will be used on KEDA jobs. |
dashboardPrefix | String | Which URL subpath you want a Service to run in with respect to your Shakudo service domain. e.g., http://shakudoservice.io/modelV1. |
daskDashboardUrl | String | URL for dask dashboard |
debuggable | Boolean! | Whether the debuggable service is enabled. |
department | String | Not used |
displayedOwner | String! | Username of the user account that owns the job, currently the user that created the job. Username is based on email. |
duration | Int | Duration of the pipeline job. |
estimatedCost | Float | Not used |
exposedPort | String | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. |
grafanaLink | String! | Link to Grafana logs. |
group | String | Disabled, not used. Leave as empty string |
hyperplaneUser | HyperplaneUser | Shakudo platform user account details. Can either provide identifiers to connect to an existing account or can provide values to create a new user account. |
hyperplaneUserEmail | String | Shakudo platform user email. |
hyperplaneUserId | String | Shakudo platform user ID. |
hyperplaneVCServer | HyperplaneVCServer | Shakudo Platform Git Server object. |
hyperplaneVCServerId | String | Git server object ID. |
hyperplanepodspecName | String | |
icon | String | |
id | String! | |
imageHash | String | URL of custom image, same as imageUrl |
jobCommand | String | |
jobName | String | Plain name of job viewable from the dashboard, is not necessarily unique. |
jobType | String! | Name of Shakudo platform Podspec/Image. |
mappedUrl | String | |
maxHpaRange | Int! | Maximum number of replicas for HPA (horizontal pod autoscaling). |
maxRetries | Int! | Maximum number of job retries. |
maxRetriesPerStep | Int! | Maximum number of retries per job step. |
minReplicas | Int! | Minimum number of K8s ReplicaSets. |
noGitInit | Boolean | False if git server is to be set up using Shakudo platform workflow. Default: false. |
noHyperplaneCommands | Boolean | False if using default Shakudo platform commands on job creation. Required to use Shakudo platform jobs through the pipeline yaml, but not required if the image has its own setup. Default: false. |
noVSRewrite | Boolean | Only supported for Services. If enabled, the external prefix/subpath on the Shakudo domain directly corresponds to the same subpath within the Service. |
output | String | |
outputNotebooksPath | String | |
owner | String | Typically mirrors displayedOwner; refer primarily to displayedOwner. |
parameters | [Parameter!]! | List of key-value parameters that are injected into the Job environment and can be used as environment variables. |
parentJob | PipelineJob | The info of that parent job if the current job spawned from another job. |
parentJobId | String | Parent job ID. |
pinned | Boolean! | Whether the job is pinned on the dashboard. |
pipelineYamlPath | String | Relative path to .yaml file for running pipeline |
podSpecTemplate | HyperplanePodSpec | Not used, similar use to jobType. |
podSpecTemplateId | String | ID for the corresponding podSpecTemplate. |
preemptible | Boolean! | Determines whether the job can be preempted, i.e., timed out. This means that a Preemptible VM will be used. |
premptableNode | Boolean! | Not used |
priorityClass | String! | K8s pod priority classification. |
runId | String | |
schedule | String! | Either "immediate" for an immediate job or a cron schedule expression for a scheduled job at the specified interval. |
sendNotification | Boolean | |
slackChannelName | String | |
sshCommand | String | |
startTime | DateTime! | Job start time. |
status | String! | The status of the Kubernetes job that runs the pipeline job |
statusReason | String | Kubernetes job status detail |
steps | [PipelineStep!]! | Pipeline step objects that correspond with individual script steps |
timeout | Int! | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Default: -1 (never timeout) |
timeoutPerStep | Int | |
timezone | String! | e.g., UTC |
useHyperplanepodspec | Boolean | Not used |
workerPodName | String | Not used |
Example:
{
"TritonClient": null,
"activeTimeout": 82400,
"billingProject": null,
"billingProjectId": null,
"branchName": null,
"branchNameOrCommit": null,
"childJobs": [],
"cloudRunner": "",
"commitId": null,
"completionTime": null,
"customCommand": null,
"customTrigger": null,
"dashboardPrefix": null,
"daskDashboardUrl": null,
"debuggable": false,
"department": null,
"displayedOwner": "",
"duration": null,
"estimatedCost": null,
"exposedPort": null,
"grafanaLink": "https://grafana.sample.hyperplane.dev/explore",
"group": null,
"hyperplaneUser": null,
"hyperplaneUserEmail": null,
"hyperplaneUserId": null,
"hyperplaneVCServer": null,
"hyperplaneVCServerId": null,
"hyperplanepodspecName": null,
"icon": null,
"id": "b50e8ea9-1627-4a5d-b7c7-ebad6c801d0a",
"imageHash": "",
"jobCommand": null,
"jobName": null,
"jobType": "basic",
"mappedUrl": null,
"maxHpaRange": 1,
"maxRetries": 2,
"maxRetriesPerStep": 0,
"minReplicas": 1,
"noGitInit": false,
"noHyperplaneCommands": false,
"noVSRewrite": false,
"output": null,
"outputNotebooksPath": null,
"owner": null,
"parameters": [],
"parentJob": null,
"parentJobId": null,
"pinned": false,
"pipelineYamlPath": "example_pipeline.yaml",
"podSpecTemplate": null,
"podSpecTemplateId": null,
"preemptible": true,
"premptableNode": true,
"priorityClass": "shakudo-priority-class",
"runId": null,
"schedule": "immediate",
"sendNotification": null,
"slackChannelName": null,
"sshCommand": null,
"startTime": "2023-06-29T16:38:10.829Z",
"status": "pending",
"statusReason": null,
"steps": [],
"timeout": 82400,
"timeoutPerStep": null,
"timezone": "UTC",
"useHyperplanepodspec": false,
"workerPodName": null
}
Parameter
Key-value pairs that are injected into Jobs and Session environments
type Parameter {
PipelineJob: PipelineJob
id: String!
key: String!
pipelineJobId: String
value: String
}
Field | Type | Definition |
---|---|---|
PipelineJob | PipelineJob | Pipeline job that has this parameter |
id | String! | The ID of the parameter |
key | String! | The key of the parameter |
pipelineJobId | String | The ID of the pipeline job that has this parameter |
value | String | The value of the parameter |
{
"key": "key",
"value": "value",
"id": "b50e8ea9-1627-4a5d-b7c7-ebad6c801d0a",
"pipelineJobId": "833bd3d1-bb63-4289-99d8-25c2856f2fba"
}
HyperplaneVCServer
Git servers tied to remote git repositories
type HyperplaneVCServer {
defaultBranch: String!
id: String!
name: String!
pipelineJobs(cursor: PipelineJobWhereUniqueInput, distinct: [PipelineJobScalarFieldEnum!], orderBy: [PipelineJobOrderByInput!], skip: Int, take: Int, where: PipelineJobWhereInput): [PipelineJob!]!
serviceUrl: String
status: HyperplaneVCServerStatus!
url: String!
}
Field | Type | Definition |
---|---|---|
defaultBranch | String! | The default git branch of the git server |
id | String! | The ID of the git server (HyperplaneVCServer) object |
name | String! | The name of the git server |
pipelineJobs | [PipelineJob!]! | The pipeline jobs that are connected to this git server. Mirrors pipelineJobs query |
serviceUrl | String | The service URL (DNS record) for in-cluster connection access |
status | HyperplaneVCServerStatus! | The status of the git server resource |
url | String! | The remote repository SSH URL |
HyperplanePodSpec
Environment configs for defining Shakudo resources, covering image, hardware, storage, Kubernetes settings, etc.
type HyperplanePodSpec {
description: String!
displayName: String!
extraEnvars: String
extraTolerations: String
extraVolumeMounts: String
extraVolumes: String
gpuResourceType: String
hyperhubSessions(cursor: HyperHubSessionWhereUniqueInput, distinct: [HyperHubSessionScalarFieldEnum!], orderBy: [HyperHubSessionOrderByInput!], skip: Int, take: Int, where: HyperHubSessionWhereInput): [HyperHubSession!]!
hyperplaneImage: HyperplaneImage
hyperplaneImageId: String
hyperplaneUser: HyperplaneUser
hyperplaneUserEmail: String
hyperplaneUserId: String
icon: String!
id: String!
imagePullPolicy: String
imageUrl: String
nodeSelector: String
nodeSelectorKey: String
nodeSelectorValue: String
pipelineJobs(cursor: PipelineJobWhereUniqueInput, distinct: [PipelineJobScalarFieldEnum!], orderBy: [PipelineJobOrderByInput!], skip: Int, take: Int, where: PipelineJobWhereInput): [PipelineJob!]!
podSpec: String
podspecName: String!
pv: String
pvc: String
resourceCPUlimit: String
resourceCPUrequest: String
resourceGPUrequest: String
resourceRAMlimit: String
resourceRAMrequest: String
show: Boolean!
status: String!
statusReason: String
workingDir: String
}
Field | Type | Definition |
---|---|---|
description | String! | PodSpec description |
displayName | String! | General purpose display name for PodSpec that appears as a title on dashboard |
extraEnvars | String | List of key-value parameters that are injected into any Shakudo resource environment |
extraTolerations | String | Additional pod toleration rules |
extraVolumeMounts | String | Additional storage mounting rules. These are relative to the provided Volumes |
extraVolumes | String | Additional persistent storage spaces |
gpuResourceType | String | Type of GPU resource |
hyperhubSessions | [HyperHubSession!]! | Sessions that currently use this PodSpec. Mirrors hyperhubSessions query |
hyperplaneImage | HyperplaneImage | Hyperplane image associated with the PodSpec |
hyperplaneUser | HyperplaneUser | Shakudo platform user account details |
icon | String! | Not used |
id | String! | ID of the PodSpec |
imagePullPolicy | String | Kubernetes image pull policy for the pod (e.g., Always, IfNotPresent) |
imageUrl | String | URL of image |
nodeSelector | String | Kubernetes node selector constraining which nodes the pod can be scheduled on |
nodeSelectorKey | String | Label key used for the node selector |
nodeSelectorValue | String | Label value used for the node selector |
pipelineJobs | [PipelineJob!]! | The pipeline jobs that use this PodSpec. Mirrors pipelineJobs query |
podSpec | String | |
podspecName | String! | |
pv | String | Persistent volume associated with the PodSpec |
pvc | String | Persistent volume claim associated with the PodSpec |
resourceCPUlimit | String | CPU limit for resource allocation |
resourceCPUrequest | String | CPU request for resource allocation |
resourceGPUrequest | String | GPU request for resource allocation |
resourceRAMlimit | String | RAM limit for resource allocation |
resourceRAMrequest | String | RAM request for resource allocation |
show | Boolean! | |
status | String! | Status of the PodSpec |
statusReason | String | Reason for the PodSpec status |
workingDir | String | Working directory for the PodSpec |
Operations
hyperHubSessions
Signature
hyperHubSessions(
cursor: HyperHubSessionWhereUniqueInput,
distinct: [HyperHubSessionScalarFieldEnum!],
orderBy: [HyperHubSessionOrderByInput!],
skip: Int,
take: Int,
where: HyperHubSessionWhereInput
): [HyperHubSession!]!
Function Description
Retrieves a list of Shakudo platform session metadata, allowing for pagination (cursor and offset-based) and filtering.
Input Object Fields
Field | Type | Definition |
---|---|---|
cursor | HyperHubSessionWhereUniqueInput | Starting session value to paginate from using cursor-based pagination, i.e., the current result starts from this session record. |
distinct | [HyperHubSessionScalarFieldEnum!] | List of fields where their values will remain distinct per record. |
orderBy | [HyperHubSessionOrderByInput!] | List of fields that will be used to order the results, ordering precedence determined by the location in the list. |
skip | Int | The number of records to skip from the original result. |
take | Int | The maximum number of records to show in the result. |
where | HyperHubSessionWhereInput | Conditional values to filter for a specific HyperHubSession object. |
Request Example
query HyperhubSessions($limit: Int!, $email: String, $status: String) {
hyperHubSessions(orderBy:{startTime: desc}, take: $limit, where: {
hyperplaneUserEmail: {equals: $email},
status: {equals: $status},
}) {
id
hyperplaneUserEmail
status
imageType
jLabUrl
notebookURI
estimatedCost
department
resourceCPUlimit
resourceRAMlimit
resourceCPUrequest
resourceRAMrequest
gpuRequest
startTime
completionTime
}
countHyperHubSessions
}
Variables
{
"limit": 10,
"email": "demo@shakudo.io",
"status": "in progress"
}
Response Object Fields
HyperHubSession
Response Example
{
"data": {
"hyperHubSessions": [
{
"id": "78ba5679-1fd0-475a-88b0-d1877413747f",
"hyperplaneUserEmail": "demo@shakudo.io",
"status": "in progress",
"imageType": "basic",
"jLabUrl": "client/hyperplane.dev/jlabUrl/",
"notebookURI": "ssh demo-pvc-entry@demo.dev",
"estimatedCost": null,
"department": null,
"resourceCPUlimit": null,
"resourceRAMlimit": null,
"resourceCPUrequest": null,
"resourceRAMrequest": null,
"gpuRequest": null,
"startTime": "2023-06-28T15:32:40.090Z",
"completionTime": null
}
],
"countHyperHubSessions": 3006
}
}
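For offset-based pagination, skip and take can be combined; a minimal sketch that fetches the second page of 10 records (field names as in the query above, values illustrative):
query HyperhubSessionsPage($email: String) {
hyperHubSessions(orderBy:{startTime: desc}, skip: 10, take: 10, where: {
hyperplaneUserEmail: {equals: $email}
}) {
id
status
startTime
}
}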
createHyperHubSession
Signature
createHyperHubSession(
data: HyperHubSessionCreateInput!
): HyperHubSession!
Function Description
Creates a Shakudo Session, a data development environment that comes pre-configured and is typically accessible in the form of a Jupyter notebook.
Input Object Fields
Field | Type | Description |
---|---|---|
data | HyperHubSessionCreateInput | HyperHubSession object that contains field values used to create a Session. Check HyperHubSessionCreateInput for specific fields. |
Request Example
mutation CreateHyperHubSession($input: HyperHubSessionCreateInput!) {
createHyperHubSession(data: $input) {
id
hyperplaneUserEmail
status
imageType
jLabUrl
estimatedCost
department
resourceCPUlimit
resourceRAMlimit
resourceCPUrequest
resourceRAMrequest
gpuRequest
startTime
completionTime
timeout
group
billingProjectId
}
}
Sample Variables
{
"input": {
"collaborative": false,
"imageType": "basic",
"imageUrl": "",
"timeout": 900,
"userPvcName": "",
"group": "",
"hyperplaneUserId": "2a9980d9-f43c-4369-b71e-70d12d369e47",
"billingProjectId": {
"connect": {
"id": "284f0a8e-52d9-4a57-be42-f461fc4315c7"
}
},
"hyperplaneUserEmail": "demo@shakudo.io"
}
}
Response Object Fields
HyperHubSession
Response Example
{
"data": {
"createHyperHubSession": {
"id": "f48e3b18-bced-4a8c-85b9-b3c2fdd1a06a",
"hyperplaneUserEmail": "demo@shakudo.io",
"status": "pending",
"imageType": "basic",
"jLabUrl": null,
"estimatedCost": null,
"department": null,
"resourceCPUlimit": null,
"resourceRAMlimit": null,
"resourceCPUrequest": null,
"resourceRAMrequest": null,
"gpuRequest": null,
"startTime": "2023-06-27T19:27:43.987Z",
"completionTime": null,
"timeout": 900,
"group": "",
"billingProjectId": "f6a3911d-e048-49f0-96d8-1abd930b66db"
}
}
}
updateHyperHubSession
Signature
updateHyperHubSession(
data: HyperHubSessionUpdateInput!
where: HyperHubSessionWhereUniqueInput!
): HyperHubSession
Function Description
Updates the fields of a specific Session based on the data and conditions provided.
Input Object Fields
Field | Type | Definition |
---|---|---|
data | HyperHubSessionUpdateInput! | HyperHubSession partial object that contains field values used to update, specified in the format [field]: {[action]: [value]} |
where | HyperHubSessionWhereUniqueInput! | Conditional values to filter for a specific HyperHubSession object |
Request Example: Cancelling a Session
mutation UpdateHyperHubSession($id: String!) {
updateHyperHubSession(where: {id: $id}, data: {
status: {set: "cancelled"}
}) {
id
status
}
}
Variables
{
"id": "7b728979-71b7-426c-9847-6fe3e29a6438"
}
Response Object Fields
HyperHubSession
Response Example
{
"data": {
"updateHyperHubSession": {
"id": "7b728979-71b7-426c-9847-6fe3e29a6438",
"status": "cancelled"
}
}
}
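The same {set: value} pattern applies to other updatable Session fields; for example, a sketch that extends a Session's timeout (assuming timeout is exposed through HyperHubSessionUpdateInput):
mutation ExtendSessionTimeout($id: String!, $timeout: Int!) {
updateHyperHubSession(where: {id: $id}, data: {
timeout: {set: $timeout}
}) {
id
timeout
}
}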
getJobStat
Signature
getJobStat(
stat: StatType!,
status: StatusType,
timeFrame: TimeFrame!
): Int
Function Description
Retrieves job count statistics based on the conditions provided.
Input Object Fields
Field | Type | Description |
---|---|---|
stat | StatType! | Statistic type options. Possible values: COUNT_ALL, COUNT_CANCELLED, COUNT_DONE, COUNT_FAILED, COUNT_IN_PROGRESS, COUNT_PENDING, COUNT_SCHEDULED, COUNT_TIMED_OUT, COUNT_TRIGGERED. |
status | StatusType | Status type options. Possible values: ALL, SCHEDULED, TRIGGERED. |
timeFrame | TimeFrame! | Timeframe options. Possible values: TOTAL, T_1H, T_10M, T_24H. |
Request Example
query {
getJobStat(stat: COUNT_ALL, timeFrame: TOTAL)
}
Response Object Fields
Int
Response Example
{
"data": {
"getJobStat": 105179
}
}
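The status and timeFrame arguments narrow the count further; for example, a sketch that counts failed scheduled jobs over the last 24 hours (enum values are those listed in the table above):
query {
getJobStat(stat: COUNT_FAILED, status: SCHEDULED, timeFrame: T_24H)
}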
createPipelineJob
Signature
createPipelineJob(
data: PipelineJobCreateInput!
): PipelineJob!
Function Description
Creates a Shakudo platform job, which allows users to run task scripts using custom configurations, either immediately as an “Immediate job”, at scheduled intervals as a “Scheduled Job”, or indefinitely as a “Service”.
Immediate jobs: schedule set to "immediate"
Scheduled jobs: schedule set to a cron schedule expression, e.g. 0 0 * * * for a job that runs daily at midnight
Services: timeout and activeTimeout set to -1, and exposedPort set to a valid port (sketches of the corresponding input fragments follow below)
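As a sketch, the corresponding PipelineJobCreateInput fragments might look as follows (field names are taken from the Example fields table below; all values are illustrative):
Immediate job:
{ "schedule": "immediate", "timeout": 86400 }
Scheduled job (daily at midnight):
{ "schedule": "0 0 * * *", "timeout": 86400 }
Service:
{ "timeout": -1, "activeTimeout": -1, "exposedPort": "8787" }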
Input Object Fields
Field | Type | Description |
---|---|---|
data | PipelineJobCreateInput! | PipelineJob object that contains field values used to create a PipelineJob. Check PipelineJobCreateInput for specific fields. |
Example fields within PipelineJobCreateInput:
Field | Type | Definition |
---|---|---|
type | String! | Name of Shakudo platform Podspec/Image, default or custom. Example: "basic" |
timeout | Int! | The maximum time in seconds that the pipeline may run, starting from the moment of job submission. Default: -1 (never timeout). Example: 86400 |
activeTimeout | Int | The maximum time in seconds that the pipeline may run once it is picked up. Default: -1 (never timeout). Example: 86400 |
maxRetries | Int! | The maximum number of attempts to run your pipeline job before returning an error, even if timeouts are not reached. Default: 2 |
yamlPath | String | The relative path to the .yaml file used to run this pipeline job. Example: "example_notebooks/pipelines/python_hello_world_pipeline/pipeline.yaml" |
exposedPort | String | Only enabled for Shakudo Services. The port that Services use to expose the pod to other pods within the cluster. Its presence is a current indicator of whether a job is a Service. |
schedule | String | Either "immediate" for an immediate job or a cron schedule expression for a scheduled job at the specified interval. |
parameters | ParameterCreateNestedManyWithoutPipelineJobInput | Key-value pairs that can be used within the container environment |
gitServer | HyperplaneVCServerCreateNestedOneWithoutPipelineJobsInput | Git server object, retrievable by searching git servers by name (hyperplaneVCServers) and using resulting id in the following manner: { connect: { id: $gitServerId } } |
hyperplaneUserEmail | String! | Shakudo platform user email |
branchName | String | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. If commitID is not specified, the latest commit is used. If not specified, default branch is used. |
podSpec | JSON | Shakudo platform PodSpec config object as a JSON object |
Request Example
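A minimal request sketch for an immediate job, assuming PipelineJobCreateInput accepts the example fields listed above together with the nested connect format documented for gitServer:
mutation CreatePipelineJob($input: PipelineJobCreateInput!) {
createPipelineJob(data: $input) {
id
jobName
status
schedule
startTime
}
}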
Variables
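Illustrative values only; the gitServer id would come from the hyperplaneVCServers query described later on this page:
{
"input": {
"type": "basic",
"schedule": "immediate",
"timeout": 86400,
"maxRetries": 2,
"yamlPath": "example_notebooks/pipelines/python_hello_world_pipeline/pipeline.yaml",
"hyperplaneUserEmail": "demo@shakudo.io",
"gitServer": { "connect": { "id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8" } }
}
}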
Response Object Fields
PipelineJob
Response Example
updatePipelineJob
Signature
updatePipelineJob(
data: PipelineJobUpdateInput!
where: PipelineJobWhereUniqueInput!
): PipelineJob
Function Description
Updates the database fields of a specific PipelineJob. Check PipelineJobUpdateInput to see how to do so.
Input Object Fields
Field | Type | Description |
---|---|---|
data | PipelineJobUpdateInput! | PipelineJob partial object that contains field values used to update, specified in the following format [field]: {[action]: [value]}. Check the PipelineJobUpdateInput type for specific fields and their descriptions. |
where | PipelineJobWhereUniqueInput! | Conditional values to filter for a specific PipelineJob object. Check the PipelineJobWhereUniqueInput type for specific fields and their descriptions. |
Request Example
mutation UpdatePipelineJob($jobId: String!, $parameterId: String!) {
updatePipelineJob(where: {id: $jobId},
data: {
parameters: {disconnect: {id: $parameterId}},
}) {
id
}
}
Variables
{
"jobId": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"parameterId": "9f8f0524-6d67-4996-bd45-8a2434d97c1f"
}
Response Object Fields
PipelineJob
Response Example
{
"data": {
"updatePipelineJob": {
"id": "9f8f0524-6d67-4996-bd45-8a2434d97c1f"
}
}
}
updateParameter
Signature
updateParameter(
data: ParameterUpdateInput!
where: ParameterWhereUniqueInput!
): Parameter
Function Description
Updates the database fields of a specific Parameter. Parameters are objects that represent environment variables within Shakudo resources like Jobs and Sessions.
Input Object Fields
Field | Type | Description |
---|---|---|
data | ParameterUpdateInput! | Parameter partial object that contains field values used to update, specified in the format [field]: {[action]: [value]}, where [field] is the name of the field to update, [action] is the update action (e.g., set, increment, decrement), and [value] is the new value for the field. Check the ParameterUpdateInput type for specific fields and their descriptions. |
where | ParameterWhereUniqueInput! | Conditional values to filter for a specific Parameter object. Check the ParameterWhereUniqueInput type for specific fields and their descriptions. |
Request Example
mutation ($parameterId: String!, $keyValue: String, $valueValue: String) {
updateParameter(where: {id: $parameterId}, data: {
key: {set: $keyValue}
value: {set: $valueValue}
}) {
id
key
value
}
}
Sample Variables
{
"parameterId": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"keyValue": "newKey",
"valueValue": "newValue"
}
Response Type
Parameter
Sample Response
{
"data": {
"updateParameter": {
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8"
"keyValue": "newKey",
"valueValue": "newValue"
}
}
}
createHyperplaneVCServer
Signature
createHyperplaneVCServer(data: HyperplaneVCServerCreateInput!): HyperplaneVCServer!
Function Description
Creates a git server connected to a specific git repository to make it accessible on the Shakudo platform.
Input Object Fields
Field | Type | Description |
---|---|---|
data | HyperplaneVCServerCreateInput! | HyperplaneVCServer object that contains field values used to create a git server. Check the HyperplaneVCServerCreateInput type for specific fields and their descriptions. |
Request Example
mutation($data: HyperplaneVCServerCreateInput!) {
createHyperplaneVCServer(data: $data) {
id
defaultBranch
name
url
}
}
Variables
{
"data": {
"defaultBranch": "main",
"name": "examples-graphql-test",
"url": "git@github.com:org/sample.git"
}
}
Response Object Fields
HyperplaneVCServer
Response Example
{
"data": {
"createHyperplaneVCServer": {
"id": "3aff9f7c-c208-44e2-b389-495a11708349",
"defaultBranch": "main",
"name": "examples-graphql-test",
"pipelineJobs": [],
"status": "CREATING",
"url": "git@github.com:org/sample.git"
}
}
}
hyperplaneVCServers
Signature
hyperplaneVCServers(
cursor: HyperplaneVCServerWhereUniqueInput,
distinct: [HyperplaneVCServerScalarFieldEnum!],
orderBy: [HyperplaneVCServerOrderByInput!],
skip: Int,
take: Int,
where: HyperplaneVCServerWhereInput
): [HyperplaneVCServer!]!
Function Description
Retrieves a list of git server instances based on conditions provided, allowing for pagination (cursor and offset-based) and filtering.
Input Object Fields
Field | Type | Description |
---|---|---|
cursor | HyperplaneVCServerWhereUniqueInput | Starting git server value to paginate from using cursor-based pagination. The current result starts from this git server record. |
distinct | [HyperplaneVCServerScalarFieldEnum!] | List of fields where their values will remain distinct per record. |
orderBy | [HyperplaneVCServerOrderByInput!] | List of fields that will be used to order the results. The ordering precedence is determined by the location in the list. |
skip | Int | The number of records to skip from the original result. |
take | Int | The maximum number of records to show in the result. |
where | HyperplaneVCServerWhereInput | Conditional values to filter for a specific HyperplaneVCServer object. |
Request Example
query ($name: String!) {
hyperplaneVCServers(where: { name: {equals: $name } }){
id
defaultBranch
name
pipelineJobs {
id
}
status
url
serviceUrl
}
}
Variables
{
"name": "examples-graphql-test"
}
Response Object Fields
Array of HyperplaneVCServer
Response Example
{
"data": {
"hyperplaneVCServers": [
{
"id": "65e3a289-1371-4009-9fb3-c03bfbcbebd8",
"defaultBranch": "main",
"name": "examples-graphql-test",
"pipelineJobs": [],
"status": "CREATED",
"url": "git@github.com:org/sample.git",
"serviceUrl": "sample-service-url.namespace.svc.cluster.local"
}
]
}
}
getHyperhubSessionPodSpec
Signature
getHyperhubSessionPodSpec(
imageUrl: String = ""
userPvcName: String = ""
userEmail: String = ""
imageType: String = ""
): JSON
Function Description
Retrieves the full PodSpec as a JSON string for Sessions, which can be used and customized with granularity in createHyperHubSession by itself, instead of relying on creating a PodSpec object using createPodSpec, which has more limited options.
Input Object Fields
Field | Type | Description |
---|---|---|
imageUrl | String | URL of custom image, same as imageHash |
userPvcName | String | Persistent volume name as found in Kubernetes. Typically includes the drive name found on the dashboard. Default: empty string, which corresponds with default drive claim-{user-email}. |
userEmail | String | Shakudo platform user account email |
imageType | String | Name of Shakudo platform Podspec/Image |
Note: userEmail is required in the request example below to identify the respective user, but the query itself does not require it.
Request Example
query GetHyperhubSessionPodSpec($imageType: String, $userPvcName: String = "", $userEmail: String!, $imageUrl: String) {
getHyperhubSessionPodSpec(
imageType: $imageType,
userPvcName: $userPvcName,
userEmail: $userEmail,
imageUrl: $imageUrl
)
}
Variables
{
"imageType": "basic",
"userEmail": "demo@shakudo.io"
}
Response Object Fields
JSON
getPipelineJobPodSpec
Signature
getPipelineJobPodSpec(
parameters: ParametersInput
gitServerName: String = ""
noGitInit: Boolean = false
imageUrl: String = ""
userEmail: String = ""
noHyperplaneCommands: Boolean = false
commitId: String = ""
branchName: String = ""
pipelineYamlPath: String = ""
debuggable: Boolean = false
jobType: String = ""
): JSON
Function Description
Retrieves the full PodSpec as a JSON string for PipelineJobs, which can be used and customized with granularity in createPipelineJob by itself, instead of relying on creating a PodSpec object using createPodSpec, which has more limited options.
Input Object Fields
Field | Type | Description |
---|---|---|
parameters | ParametersInput | List of key-value parameters that are injected into the Job environment and can be used as environment variables |
gitServerName | String ("" if not provided) | Git server name; corresponds with HyperplaneVCServer.name, which is the display name assigned on the dashboard |
noGitInit | Boolean (false if not provided) | False if git server is to be set up using default Shakudo platform workflow. Default: false |
imageUrl | String ("" if not provided) | If the image is custom, then the image URL can be provided |
userEmail | String! (required) | Shakudo platform user account email |
noHyperplaneCommands | Boolean | False if using default Shakudo platform commands on job creation. Required to use Shakudo platform jobs through the pipeline YAML, but not required if the image has its own setup. Default: false |
commitId | String ("" if not provided) | The commit ID with the versions of the pipeline YAML file and pipeline scripts wanted. Ensure that both are present if the commit ID is used. If left empty, assume that the latest commit on the branch is used |
branchName | String | The name of the specific git branch that contains the pipeline YAML file and pipeline scripts. If commitID is not specified, the latest commit is used. If not specified, default branch is used. |
pipelineYamlPath | String ("" if not provided) | The relative path to the .yaml file used to run this pipeline job |
debuggable | Boolean (false if not provided) | Whether to enable SSH-based debugging for the job; see the SSH debugging tutorial for more details |
jobType | String ("" if not provided) | Name of Shakudo platform Podspec/Image, default or custom |
Request Example
query GetPipelineJobPodSpec(
$parameters: ParametersInput
$gitServerName: String = ""
$noGitInit: Boolean = false
$imageUrl: String = ""
$userEmail: String!
$noHyperplaneCommands: Boolean = false
$commitId: String = ""
$branchName: String!
$pipelineYamlPath: String!
$debuggable: Boolean = false
$jobType: String!
) {
getPipelineJobPodSpec(
parameters: $parameters
gitServerName: $gitServerName
noGitInit: $noGitInit
imageUrl: $imageUrl
userEmail: $userEmail
noHyperplaneCommands: $noHyperplaneCommands
commitId: $commitId
branchName: $branchName
pipelineYamlPath: $pipelineYamlPath
debuggable: $debuggable
jobType: $jobType
)
}
Variables
{
"userEmail": "demo@shakudo.io",
"branchName": "main",
"pipelineYamlPath": "example/pipeline.yaml",
"jobType": "basic"
}
Response Object Fields
JSON
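One possible way to combine this query with job creation, as a sketch (it assumes the returned JSON can be passed, as is or after client-side edits, through the podSpec field documented under createPipelineJob):
# 1. Fetch the default PodSpec for the job
query GetPodSpecForJob($userEmail: String!, $branchName: String!, $pipelineYamlPath: String!, $jobType: String!) {
getPipelineJobPodSpec(
userEmail: $userEmail
branchName: $branchName
pipelineYamlPath: $pipelineYamlPath
jobType: $jobType
)
}
# 2. Create the job; $input carries the usual fields plus "podSpec" set to the JSON returned in step 1
mutation CreateJobWithPodSpec($input: PipelineJobCreateInput!) {
createPipelineJob(data: $input) {
id
status
}
}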
getUserServicePodSpec
Signature
getUserServicePodSpec(
exposedPort: String = "8787"
parameters: ParametersInput
gitServerName: String = ""
noGitInit: Boolean = false
imageUrl: String = ""
userEmail: String = ""
noHyperplaneCommands: Boolean = false
commitId: String = ""
branchName: String = ""
pipelineYamlPath: String = ""
jobType: String = ""
): JSON
Function Description
Retrieves the full PodSpec as a JSON string for Services, which can be used and customized with granularity in createPipelineJob by itself, instead of relying on the parameters found in createPodSpec.
Input Object Fields
Field | Type | Description |
---|---|---|
exposedPort | String | The exposed port for the job. Default value is 8787. |
parameters | ParametersInput | List of key-value parameters that are injected into the job environment and can be used as environment variables. |
gitServerName | String | The name of the Git server. It corresponds with HyperplaneVCServer.name, which is the display name assigned on the dashboard. Default value is an empty string. |
noGitInit | Boolean | Specifies whether to set up the Git server using the default Shakudo platform workflow. Default value is false. |
imageUrl | String | The URL of a custom image. If the image is custom, this field can be provided. Default: "" (empty string) |
userEmail | String | The email of the Shakudo platform user account. This field is required. |
noHyperplaneCommands | Boolean | Specifies whether to use default Shakudo platform commands on job creation. It is required to use Shakudo platform jobs through the pipeline YAML, but not required if the image has its own setup. Default value is false. |
commitId | String | The commit ID with the versions of the pipeline YAML file and pipeline scripts wanted. Ensure that both are present if commit ID is used. If left empty, assume that the latest commit on the branch is used. Default value is an empty string. |
branchName | String | The name of the specific Git branch that contains the pipeline YAML file and pipeline scripts. If commitId is not specified, the latest commit is used. This field is required. |
pipelineYamlPath | String | The relative path to the .yaml file used to run this pipeline job. This field is required. |
jobType | String | The name of the Shakudo platform Podspec/Image. If the empty string is provided, the Podspec used will be basic. |
Request Example
query GetUserServicePodSpec(
$exposedPort: String! = "8787"
$parameters: ParametersInput
$gitServerName: String
$noGitInit: Boolean
$imageUrl: String
$userEmail: String!
$noHyperplaneCommands: Boolean
$commitId: String
$branchName: String!
$pipelineYamlPath: String!
$jobType: String!
) {
getUserServicePodSpec(
exposedPort: $exposedPort
parameters: $parameters
gitServerName: $gitServerName
noGitInit: $noGitInit
imageUrl: $imageUrl
userEmail: $userEmail
noHyperplaneCommands: $noHyperplaneCommands
commitId: $commitId
branchName: $branchName
pipelineYamlPath: $pipelineYamlPath
jobType: $jobType
)
}
Variables
{
"userEmail": "demo@shakudo.io",
"branchName": "main",
"pipelineYamlPath": "example/pipeline.yaml",
"jobType": "basic"
}
Response Object Fields
JSON