Quarkus MCP Server that indexes Red Hat product documentation and the "IA Development From Zero To Hero" workshop for OpenShift Lightspeed.
English | Español
Prerequisites:

- `oc` CLI authenticated to the cluster
- `helm` CLI (optional, for Helm-based installation)

```shell
helm repo add showroom-docs-mcp \
  https://maximilianopizarro.github.io/showroom-docs-mcp/

helm install showroom-docs-mcp showroom-docs-mcp/showroom-docs-mcp \
  --namespace openshift-lightspeed \
  --set image.pullPolicy=Always
```
The chart includes a values.schema.json for input validation. Key configurable values:
| Parameter | Default | Description |
|---|---|---|
| `replicaCount` | `1` | Number of replicas |
| `image.repository` | `quay.io/maximilianopizarro/showroom-docs-mcp` | Container image |
| `image.tag` | `latest` | Image tag |
| `image.pullPolicy` | `Always` | Pull policy (`Always`, `IfNotPresent`, `Never`) |
| `namespace` | `openshift-lightspeed` | Target namespace |
| `service.type` | `ClusterIP` | Service type |
| `service.port` | `8080` | Service port |
| `resources.requests.cpu` | `100m` | CPU request |
| `resources.requests.memory` | `256Mi` | Memory request |
| `resources.limits.cpu` | `500m` | CPU limit |
| `resources.limits.memory` | `512Mi` | Memory limit |
| `inspector.enabled` | `true` | Enable MCP Inspector deployment |
| `inspector.route.timeout` | `300s` | Route timeout for MCP connections |
| `olsConfig.enabled` | `false` | Enable OLSConfig integration |
| `olsConfig.mcpServerName` | `showroom-docs-mcp` | MCP server name in OLSConfig |
| `olsConfig.mcpServerTimeout` | `10` | MCP timeout (seconds) |
Example with custom values:

```shell
helm install showroom-docs-mcp showroom-docs-mcp/showroom-docs-mcp \
  --namespace openshift-lightspeed \
  --set image.pullPolicy=Always \
  --set resources.limits.memory=1Gi \
  --set replicaCount=2
```
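When overriding more than a few parameters, the same settings can live in a values file instead of repeated `--set` flags. A minimal sketch, assuming the chart's key layout shown in the table above (the file name `my-values.yaml` is arbitrary):

```yaml
# my-values.yaml -- equivalent to the --set flags above
replicaCount: 2

image:
  pullPolicy: Always

resources:
  limits:
    memory: 1Gi
```

Install with `helm install showroom-docs-mcp showroom-docs-mcp/showroom-docs-mcp --namespace openshift-lightspeed -f my-values.yaml`.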
```shell
# Clone the repository
git clone https://github.com/maximilianoPizarro/showroom-docs-mcp.git
cd showroom-docs-mcp

# Apply manifests
oc apply -f k8s/deployment.yaml
```
Add the MCP server to your OLSConfig. Note: use `/mcp` (Streamable HTTP), not `/mcp/sse`:

```yaml
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  featureGates:
    - MCPServer
  mcpServers:
    - name: showroom-docs-mcp
      timeout: 10
      url: 'http://showroom-docs-mcp.openshift-lightspeed.svc.cluster.local:8080/mcp'
```
Apply:

```shell
oc apply -f cluster-ols.yml
```
The OLS operator will automatically restart pods with the new configuration.
```shell
# Check that the pod is running
oc get pods -n openshift-lightspeed -l app=showroom-docs-mcp

# Verify health
oc exec -n openshift-lightspeed deploy/showroom-docs-mcp -- \
  curl -s http://localhost:8080/q/health/ready

# Check logs (should show no errors)
oc logs -n openshift-lightspeed -l app=showroom-docs-mcp

# Verify MCP tools are loaded by OLS
oc logs -n openshift-lightspeed deploy/lightspeed-app-server \
  -c lightspeed-service-api | grep "showroom-docs-mcp"
```
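The readiness endpoint returns JSON in the standard Quarkus SmallRye Health format. A minimal sketch of checking the top-level status in plain shell, with an illustrative payload in place of live `curl` output:

```shell
# Illustrative payload in the SmallRye Health format that
# /q/health/ready returns; in practice, capture the curl output instead
HEALTH='{"status":"UP","checks":[]}'

# Extract the top-level status field without requiring jq
STATUS=$(printf '%s' "$HEALTH" | sed -n 's/.*"status":"\([A-Z]*\)".*/\1/p')
echo "$STATUS"
```

A status other than `UP` means the server is not ready to accept MCP connections yet.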
Once deployed, open the OpenShift Lightspeed chat in the console and try these questions:
For Red Hat Developer Sandbox, deploy the MCP server with the Inspector to test tools with Granite:
```shell
helm repo add showroom-docs-mcp \
  https://maximilianopizarro.github.io/showroom-docs-mcp/

helm install showroom-docs-mcp showroom-docs-mcp/showroom-docs-mcp \
  --set namespace=$(oc project -q) \
  --set image.pullPolicy=Always
```
The Inspector auto-connects to the MCP server. To enable the LiteLLM proxy for OpenAI-compatible access to Granite:
```shell
helm upgrade showroom-docs-mcp showroom-docs-mcp/showroom-docs-mcp \
  --set namespace=$(oc project -q) \
  --set litellm.enabled=true \
  --set litellm.granite.apiKey=$(oc whoami -t)
```
Test the proxy:
```shell
LITELLM_HOST=$(oc get route showroom-docs-mcp-litellm -o jsonpath='{.spec.host}')

curl -s https://${LITELLM_HOST}/v1/chat/completions \
  -H "Authorization: Bearer sk-showroom-mcp-1234" \
  -H "Content-Type: application/json" \
  -d '{"model":"granite","messages":[{"role":"user","content":"Hello"}]}'
```
Note: The OAuth token expires after ~24h. Refresh it by re-running the `helm upgrade` above with `--set litellm.granite.apiKey=$(oc whoami -t)`.
```shell
cd showroom-docs-mcp
./mvnw quarkus:dev

# MCP available at http://localhost:8080/mcp
# SSE transport at http://localhost:8080/mcp/sse
# Health at http://localhost:8080/q/health/ready
```
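For a quick manual check of the Streamable HTTP endpoint, an MCP client opens with a JSON-RPC 2.0 `initialize` request. A sketch of the payload, assuming the 2024-11-05 MCP protocol revision (the `clientInfo` values are made up):

```shell
# JSON-RPC 2.0 initialize message per the MCP specification;
# clientInfo describes a made-up example client
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"demo-client","version":"0.1"}}}'

# With the dev server running, POST it to the MCP endpoint:
#   curl -s http://localhost:8080/mcp \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
METHOD=$(printf '%s' "$PAYLOAD" | sed -n 's/.*"method":"\([a-z]*\)".*/\1/p')
echo "$METHOD"
```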