1. Clone the git repository:
git clone https://github.com/kubeflow/manifests.git
cd manifests
2. Deploy the Kubeflow components: To install Kubeflow on your cluster, execute the following command:
kubectl apply -k ./kubeflow
This command applies the necessary resources to your Kubernetes cluster.
3. Check the status of the Kubeflow components:
kubectl get pods -n kubeflow
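Pods can take several minutes to reach the Running state on a fresh install. As an illustrative sketch (not part of the Kubeflow docs), here is a small Python helper that parses the table printed by `kubectl get pods -n kubeflow` and lists any pods that are still starting up; the pod names in the sample are made up for the example:

```python
def pending_pods(kubectl_output: str) -> list[str]:
    """Return names of pods whose STATUS column is not 'Running'."""
    not_ready = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the NAME/READY/STATUS header row
        fields = line.split()
        name, status = fields[0], fields[2]  # STATUS is the third column
        if status != "Running":
            not_ready.append(name)
    return not_ready

# Hypothetical sample output for demonstration:
sample = """\
NAME                      READY   STATUS             RESTARTS   AGE
ml-pipeline-7d9f8-abc     1/1     Running            0          5m
katib-controller-xyz      0/1     ContainerCreating  0          5m
"""
print(pending_pods(sample))
```

You could feed this the real output via `subprocess.run(["kubectl", "get", "pods", "-n", "kubeflow"], capture_output=True, text=True).stdout` and retry until the list comes back empty.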
Step 3: Access Kubeflow Dashboard
Once Kubeflow is deployed, you can access the central dashboard, a web interface for interacting with your Kubeflow components.
1. Set up port forwarding:
To access the dashboard, use kubectl to forward the Kubeflow services to your local machine:
kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
2. Access the dashboard:
Open a browser and go to http://localhost:8080. You should see the dashboard where you can start managing your machine learning pipelines and models.
Step 4: Create Your First ML Pipeline
Kubeflow Pipelines lets you define, manage, and monitor end-to-end ML workflows. Here’s a simple example of how you can create a pipeline.
1. Create a Pipeline:
Pipelines are defined in Python and then uploaded through the dashboard. First, install the Kubeflow Pipelines SDK:
pip install kfp
Then, create a Python file (my_pipeline.py) with a basic pipeline definition:
import kfp
from kfp import dsl

@dsl.pipeline(
    name='Simple ML Pipeline',
    description='A simple pipeline that trains a model.'
)
def simple_pipeline():
    # Define your pipeline steps here
    pass

if __name__ == '__main__':
    kfp.compiler.Compiler().compile(simple_pipeline, 'simple_pipeline.zip')
2. Upload the Pipeline:
After compiling your pipeline into a .zip file, you can upload it to the dashboard via the Pipelines UI. Click “Create Pipeline” and select the compiled pipeline file.
3. Run the Pipeline:
Once uploaded, you can start the pipeline by clicking “Start” from the UI. You will be able to track the progress of the pipeline as it runs.
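Besides the UI, runs can also be started programmatically with the SDK's client. The sketch below assumes the port-forward from Step 3 is active; the host URL, helper names, and run-name format are illustrative choices, not part of the official docs:

```python
from datetime import datetime

def make_run_name(pipeline_name: str, now: datetime) -> str:
    """Build a unique, human-readable run name, e.g. 'simple-ml-pipeline-20240102-030405'."""
    return f"{pipeline_name}-{now:%Y%m%d-%H%M%S}"

def submit_run(host: str, package_path: str, run_name: str):
    """Upload a compiled pipeline package and start a run via the KFP client."""
    import kfp  # deferred so make_run_name works even without kfp installed
    client = kfp.Client(host=host)  # e.g. 'http://localhost:8080' with the port-forward active
    return client.create_run_from_pipeline_package(
        package_path,
        arguments={},
        run_name=run_name,
    )

# Example (requires a reachable Kubeflow Pipelines endpoint):
# submit_run('http://localhost:8080',
#            'simple_pipeline.zip',
#            make_run_name('simple-ml-pipeline', datetime.now()))
```

Deferring the `kfp` import keeps the naming helper usable on machines where the SDK isn't installed; on a secured cluster, `kfp.Client` also accepts authentication parameters that are out of scope here.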