Set up GitHub Actions CI/CD pipeline and Kustomize multi-environment deployments

- Configure automated build and deployment of backend services with a GitHub Actions workflow
- Manage dev/staging/prod environment-specific settings with Kustomize
- Add a Dockerfile for each microservice
- Add deployment automation scripts and environment variable files
- Write a CI/CD guide document
wonho 2025-10-29 13:23:41 +09:00
parent 1b73d2880b
commit e7ffdcfe44
87 changed files with 4117 additions and 35 deletions

.github/README.md (new file)

@@ -0,0 +1,186 @@
# KT Event Marketing - CI/CD Infrastructure
This directory contains the CI/CD infrastructure for the KT Event Marketing backend services.
## Directory Structure
```
.github/
├── README.md                          # This file
├── workflows/
│   └── backend-cicd.yaml              # GitHub Actions workflow
├── kustomize/                         # Kubernetes manifest management
│   ├── base/                          # Base resource definitions
│   │   ├── kustomization.yaml
│   │   ├── cm-common.yaml
│   │   ├── secret-common.yaml
│   │   ├── secret-imagepull.yaml
│   │   ├── ingress.yaml
│   │   └── {service}-*.yaml           # Per-service resources
│   └── overlays/                      # Environment-specific settings
│       ├── dev/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml           # 1 replica, 256Mi-1024Mi
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml           # 2 replicas, 512Mi-2048Mi
│       └── prod/
│           ├── kustomization.yaml
│           └── *-patch.yaml           # 3 replicas, 1024Mi-4096Mi
├── config/
│   ├── deploy_env_vars_dev            # Dev environment variables
│   ├── deploy_env_vars_staging        # Staging environment variables
│   └── deploy_env_vars_prod           # Prod environment variables
└── scripts/
    ├── deploy.sh                      # Manual deployment script
    ├── generate-patches.sh            # Patch generation script
    └── copy-manifests-to-base.py      # Manifest copy script
```
## Key Files
### workflows/backend-cicd.yaml
Defines the GitHub Actions workflow.
**Triggers**:
- Push to `develop` → deploy to dev
- Push to `main` → deploy to prod
- Manual workflow dispatch → choose any environment and service
**Jobs**:
1. `detect-changes`: detect which services changed
2. `build-and-push`: build services and push images to ACR
3. `deploy`: deploy to AKS
4. `notify`: report the deployment result
### kustomize/base/kustomization.yaml
Defines the base resources shared by every environment.
**Included resources**:
- Common ConfigMaps and Secrets
- Ingress
- Deployment, Service, ConfigMap, and Secret for each of the seven services
### kustomize/overlays/{env}/kustomization.yaml
Overrides settings per environment.
**Key differences**:
- Image tag (dev/staging/prod)
- Replica count (1/2/3)
- Resource allocation (small/medium/large)
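To preview exactly what an overlay will apply before deploying, render it locally (a quick check; assumes kustomize is installed and the commands run from the repository root):
```bash
# Render the dev overlay and list the generated resource kinds
kustomize build .github/kustomize/overlays/dev | grep '^kind:' | sort | uniq -c

# Compare the rendered manifests against the live cluster
# (kubectl diff exits non-zero when drift exists)
kustomize build .github/kustomize/overlays/dev | kubectl diff -f - || true
```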
### scripts/deploy.sh
Script for manual deployments from a local machine.
**Usage**:
```bash
# Deploy all services to dev
./scripts/deploy.sh dev
# Deploy a single service to prod
./scripts/deploy.sh prod user-service
```
## Deployment Process
### Automated Deployment (GitHub Actions)
1. **Dev**:
```bash
git checkout develop
git push origin develop
```
2. **Prod**:
```bash
git checkout main
git merge develop
git push origin main
```
3. **Manual dispatch**:
- GitHub Actions UI → Run workflow
- Select the environment (dev/staging/prod)
- Select the service (all, or a specific service)
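The same manual dispatch can also be triggered from the command line (a sketch; assumes the GitHub CLI `gh` is installed and authenticated for this repository):
```bash
# Equivalent to Actions UI → Run workflow
gh workflow run backend-cicd.yaml \
  --ref develop \
  -f environment=staging \
  -f service=user-service
```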
### Manual Deployment (Local)
```bash
# Prerequisites: Azure CLI, kubectl, and kustomize installed
# Requires an active Azure login (az login)
# Deploy all services to dev
./.github/scripts/deploy.sh dev
# Deploy only user-service to prod
./.github/scripts/deploy.sh prod user-service
```
## Environment Settings
| Environment | Branch | Image Tag | Replicas | CPU Request | Memory Request |
|-------------|---------|-----------|----------|-------------|----------------|
| Dev | develop | dev | 1 | 256m | 256Mi |
| Staging | manual dispatch | staging | 2 | 512m | 512Mi |
| Prod | main | prod | 3 | 1024m | 1024Mi |
## Services
1. **user-service** (8081) - User management
2. **event-service** (8082) - Event management
3. **ai-service** (8083) - AI-based content generation
4. **content-service** (8084) - Content management
5. **distribution-service** (8085) - Prize distribution
6. **participation-service** (8086) - Event participation
7. **analytics-service** (8087) - Analytics and statistics
## Monitoring
### Check Pod Status
```bash
kubectl get pods -n kt-event-marketing
```
### Check Logs
```bash
# Tail logs in real time
kubectl logs -n kt-event-marketing -l app=user-service -f
# Logs from the previous container
kubectl logs -n kt-event-marketing <pod-name> --previous
```
### Resource Usage
```bash
# Pod resources
kubectl top pods -n kt-event-marketing
# Node resources
kubectl top nodes
```
## Troubleshooting
For the detailed troubleshooting guide, see [deployment/cicd/CICD-GUIDE.md](../../deployment/cicd/CICD-GUIDE.md).
**Common issues**:
- ImagePullBackOff → check the ACR pull secret
- CrashLoopBackOff → check the logs and validate environment variables
- Readiness probe failures → verify the context path and Actuator endpoints
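A typical first pass at diagnosing these symptoms looks like this (illustrative only; substitute the actual pod name and port):
```bash
# ImagePullBackOff: check the pod events and the image pull secret
kubectl describe pod <pod-name> -n kt-event-marketing
kubectl get secret kt-event-marketing -n kt-event-marketing

# CrashLoopBackOff: inspect crash logs and the injected environment
kubectl logs <pod-name> -n kt-event-marketing --previous
kubectl exec <pod-name> -n kt-event-marketing -- env | sort

# Readiness probe failures: call the Actuator endpoint from inside the pod
kubectl exec <pod-name> -n kt-event-marketing -- wget -qO- http://localhost:8081/actuator/health/readiness
```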
## Rollback
```bash
# Roll back to the previous revision
kubectl rollout undo deployment/user-service -n kt-event-marketing
# Roll back to a specific revision
kubectl rollout undo deployment/user-service --to-revision=2 -n kt-event-marketing
```
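To pick a target revision first, list the recorded rollout history:
```bash
kubectl rollout history deployment/user-service -n kt-event-marketing
# Inspect a specific revision before rolling back
kubectl rollout history deployment/user-service --revision=2 -n kt-event-marketing
```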
## References
- [CI/CD Guide (Korean)](../../deployment/cicd/CICD-GUIDE.md)
- [GitHub Actions documentation](https://docs.github.com/en/actions)
- [Kustomize documentation](https://kustomize.io/)
- [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/)

.github/config/deploy_env_vars_dev (new file)

@@ -0,0 +1,11 @@
# Development Environment Variables
ENVIRONMENT=dev
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=1
CPU_REQUEST=256m
MEMORY_REQUEST=256Mi
CPU_LIMIT=1024m
MEMORY_LIMIT=1024Mi

.github/config/deploy_env_vars_prod (new file)

@@ -0,0 +1,11 @@
# Production Environment Variables
ENVIRONMENT=prod
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=3
CPU_REQUEST=1024m
MEMORY_REQUEST=1024Mi
CPU_LIMIT=4096m
MEMORY_LIMIT=4096Mi

.github/config/deploy_env_vars_staging (new file)

@@ -0,0 +1,11 @@
# Staging Environment Variables
ENVIRONMENT=staging
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=2
CPU_REQUEST=512m
MEMORY_REQUEST=512Mi
CPU_LIMIT=2048m
MEMORY_LIMIT=2048Mi

.github/kustomize/base/ai-service-cm-ai-service.yaml (new file)

@@ -0,0 +1,55 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-ai-service
data:
# Server Configuration
SERVER_PORT: "8083"
# Redis Configuration (service-specific)
REDIS_DATABASE: "3"
REDIS_TIMEOUT: "3000"
REDIS_POOL_MIN: "2"
# Kafka Configuration (service-specific)
KAFKA_CONSUMER_GROUP: "ai-service-consumers"
# Kafka Topics Configuration
KAFKA_TOPICS_AI_JOB: "ai-event-generation-job"
KAFKA_TOPICS_AI_JOB_DLQ: "ai-event-generation-job-dlq"
# AI Provider Configuration
AI_PROVIDER: "CLAUDE"
AI_CLAUDE_API_URL: "https://api.anthropic.com/v1/messages"
AI_CLAUDE_ANTHROPIC_VERSION: "2023-06-01"
AI_CLAUDE_MODEL: "claude-sonnet-4-5-20250929"
AI_CLAUDE_MAX_TOKENS: "4096"
AI_CLAUDE_TEMPERATURE: "0.7"
AI_CLAUDE_TIMEOUT: "300000"
# Circuit Breaker Configuration
RESILIENCE4J_CIRCUITBREAKER_FAILURE_RATE_THRESHOLD: "50"
RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_RATE_THRESHOLD: "50"
RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_DURATION_THRESHOLD: "60s"
RESILIENCE4J_CIRCUITBREAKER_PERMITTED_CALLS_HALF_OPEN: "3"
RESILIENCE4J_CIRCUITBREAKER_SLIDING_WINDOW_SIZE: "10"
RESILIENCE4J_CIRCUITBREAKER_MINIMUM_CALLS: "5"
RESILIENCE4J_CIRCUITBREAKER_WAIT_DURATION_OPEN: "60s"
RESILIENCE4J_TIMELIMITER_TIMEOUT_DURATION: "300s"
# Redis Cache TTL Configuration (seconds)
CACHE_TTL_RECOMMENDATION: "86400"
CACHE_TTL_JOB_STATUS: "86400"
CACHE_TTL_TREND: "3600"
CACHE_TTL_FALLBACK: "604800"
# Logging Configuration
LOG_LEVEL_ROOT: "INFO"
LOG_LEVEL_AI: "DEBUG"
LOG_LEVEL_KAFKA: "INFO"
LOG_LEVEL_REDIS: "INFO"
LOG_LEVEL_RESILIENCE4J: "DEBUG"
LOG_FILE_NAME: "logs/ai-service.log"
LOG_FILE_MAX_SIZE: "10MB"
LOG_FILE_MAX_HISTORY: "7"
LOG_FILE_TOTAL_CAP: "100MB"

.github/kustomize/base/ai-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
labels:
app: ai-service
spec:
replicas: 1
selector:
matchLabels:
app: ai-service
template:
metadata:
labels:
app: ai-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: ai-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8083
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-ai-service
- secretRef:
name: secret-common
- secretRef:
name: secret-ai-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health
port: 8083
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8083
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8083
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/ai-service-secret-ai-service.yaml (new file)

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-ai-service
type: Opaque
stringData:
# Claude API Key
AI_CLAUDE_API_KEY: "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"

.github/kustomize/base/ai-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: ai-service
labels:
app: ai-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8083
protocol: TCP
name: http
selector:
app: ai-service

.github/kustomize/base/analytics-service-cm-analytics-service.yaml (new file)

@@ -0,0 +1,37 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-analytics-service
data:
# Server Configuration
SERVER_PORT: "8086"
# Database Configuration
DB_HOST: "analytic-postgresql"
DB_PORT: "5432"
DB_NAME: "analytics_db"
DB_USERNAME: "eventuser"
# Redis Configuration (service-specific)
REDIS_DATABASE: "5"
# Kafka Configuration (service-specific)
KAFKA_ENABLED: "true"
KAFKA_CONSUMER_GROUP_ID: "analytics-service"
# Sample Data Configuration (MVP only)
SAMPLE_DATA_ENABLED: "true"
# Batch Scheduler Configuration
  BATCH_REFRESH_INTERVAL: "300000" # 5 minutes (ms)
  BATCH_INITIAL_DELAY: "30000" # 30 seconds (ms)
BATCH_ENABLED: "true"
# Logging Configuration
LOG_LEVEL_APP: "INFO"
LOG_LEVEL_WEB: "INFO"
LOG_LEVEL_SQL: "WARN"
LOG_LEVEL_SQL_TYPE: "WARN"
SHOW_SQL: "false"
DDL_AUTO: "update"
LOG_FILE: "logs/analytics-service.log"

.github/kustomize/base/analytics-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
labels:
app: analytics-service
spec:
replicas: 1
selector:
matchLabels:
app: analytics-service
template:
metadata:
labels:
app: analytics-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: analytics-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8086
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-analytics-service
- secretRef:
name: secret-common
- secretRef:
name: secret-analytics-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health/liveness
port: 8086
initialDelaySeconds: 60
periodSeconds: 10
failureThreshold: 30
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8086
initialDelaySeconds: 0
periodSeconds: 10
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8086
initialDelaySeconds: 0
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/analytics-service-secret-analytics-service.yaml (new file)

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-analytics-service
type: Opaque
stringData:
DB_PASSWORD: "Hi5Jessica!"

.github/kustomize/base/analytics-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: analytics-service
labels:
app: analytics-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8086
protocol: TCP
name: http
selector:
app: analytics-service

.github/kustomize/base/cm-common.yaml (new file)

@@ -0,0 +1,46 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-common
data:
# Redis Configuration
REDIS_ENABLED: "true"
REDIS_HOST: "redis"
REDIS_PORT: "6379"
REDIS_TIMEOUT: "2000ms"
REDIS_POOL_MAX: "8"
REDIS_POOL_IDLE: "8"
REDIS_POOL_MIN: "0"
REDIS_POOL_WAIT: "-1ms"
# Kafka Configuration
KAFKA_BOOTSTRAP_SERVERS: "20.249.182.13:9095,4.217.131.59:9095"
EXCLUDE_KAFKA: ""
EXCLUDE_REDIS: ""
# CORS Configuration
CORS_ALLOWED_ORIGINS: "http://localhost:8081,http://localhost:8082,http://localhost:8083,http://localhost:8084,http://kt-event-marketing.20.214.196.128.nip.io"
CORS_ALLOWED_METHODS: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
CORS_ALLOWED_HEADERS: "*"
CORS_ALLOW_CREDENTIALS: "true"
CORS_MAX_AGE: "3600"
# JWT Configuration
JWT_ACCESS_TOKEN_VALIDITY: "604800000"
JWT_REFRESH_TOKEN_VALIDITY: "86400000"
# JPA Configuration
DDL_AUTO: "update"
SHOW_SQL: "false"
JPA_DIALECT: "org.hibernate.dialect.PostgreSQLDialect"
H2_CONSOLE_ENABLED: "false"
# Logging Configuration
LOG_LEVEL_APP: "INFO"
LOG_LEVEL_WEB: "INFO"
LOG_LEVEL_SQL: "WARN"
LOG_LEVEL_SQL_TYPE: "WARN"
LOG_LEVEL_ROOT: "INFO"
LOG_FILE_MAX_SIZE: "10MB"
LOG_FILE_MAX_HISTORY: "7"
LOG_FILE_TOTAL_CAP: "100MB"

.github/kustomize/base/content-service-cm-content-service.yaml (new file)

@@ -0,0 +1,24 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-content-service
data:
# Server Configuration
SERVER_PORT: "8084"
# Redis Configuration (service-specific)
REDIS_DATABASE: "1"
# Replicate API Configuration (Stable Diffusion)
REPLICATE_API_URL: "https://api.replicate.com"
REPLICATE_MODEL_VERSION: "stability-ai/sdxl:39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b"
# HuggingFace API Configuration
HUGGINGFACE_API_URL: "https://api-inference.huggingface.co"
HUGGINGFACE_MODEL: "runwayml/stable-diffusion-v1-5"
# Azure Blob Storage Configuration
AZURE_CONTAINER_NAME: "content-images"
# Logging Configuration
LOG_FILE_PATH: "logs/content-service.log"

.github/kustomize/base/content-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
labels:
app: content-service
spec:
replicas: 1
selector:
matchLabels:
app: content-service
template:
metadata:
labels:
app: content-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: content-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8084
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-content-service
- secretRef:
name: secret-common
- secretRef:
name: secret-content-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /api/v1/content/actuator/health
port: 8084
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /api/v1/content/actuator/health/readiness
port: 8084
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /api/v1/content/actuator/health/liveness
port: 8084
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/content-service-secret-content-service.yaml (new file)

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-content-service
type: Opaque
stringData:
# Azure Blob Storage Connection String
AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=https;AccountName=blobkteventstorage;AccountKey=tcBN7mAfojbl0uGsOpU7RNuKNhHnzmwDiWjN31liSMVSrWaEK+HHnYKZrjBXXAC6ZPsuxUDlsf8x+AStd++QYg==;EndpointSuffix=core.windows.net"
# Replicate API Token
REPLICATE_API_TOKEN: ""
# HuggingFace API Token
HUGGINGFACE_API_TOKEN: ""

.github/kustomize/base/content-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: content-service
labels:
app: content-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8084
protocol: TCP
name: http
selector:
app: content-service

.github/kustomize/base/distribution-service-cm-distribution-service.yaml (new file)

@@ -0,0 +1,28 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-distribution-service
data:
# Server Configuration
SERVER_PORT: "8085"
# Database Configuration
DB_HOST: "distribution-postgresql"
DB_PORT: "5432"
DB_NAME: "distributiondb"
DB_USERNAME: "eventuser"
# Kafka Configuration
KAFKA_ENABLED: "true"
KAFKA_CONSUMER_GROUP: "distribution-service"
# External Channel APIs
URIDONGNETV_API_URL: "http://localhost:9001/api/uridongnetv"
RINGOBIZ_API_URL: "http://localhost:9002/api/ringobiz"
GINITV_API_URL: "http://localhost:9003/api/ginitv"
INSTAGRAM_API_URL: "http://localhost:9004/api/instagram"
NAVER_API_URL: "http://localhost:9005/api/naver"
KAKAO_API_URL: "http://localhost:9006/api/kakao"
# Logging Configuration
LOG_FILE: "logs/distribution-service.log"

.github/kustomize/base/distribution-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
labels:
app: distribution-service
spec:
replicas: 1
selector:
matchLabels:
app: distribution-service
template:
metadata:
labels:
app: distribution-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: distribution-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8085
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-distribution-service
- secretRef:
name: secret-common
- secretRef:
name: secret-distribution-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health
port: 8085
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8085
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8085
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/distribution-service-secret-distribution-service.yaml (new file)

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-distribution-service
type: Opaque
stringData:
DB_PASSWORD: "Hi5Jessica!"

.github/kustomize/base/distribution-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: distribution-service
labels:
app: distribution-service
spec:
type: ClusterIP
selector:
app: distribution-service
ports:
- name: http
port: 80
targetPort: 8085
protocol: TCP

.github/kustomize/base/event-service-cm-event-service.yaml (new file)

@@ -0,0 +1,28 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-event-service
data:
# Server Configuration
SERVER_PORT: "8080"
# Database Configuration
DB_HOST: "event-postgresql"
DB_PORT: "5432"
DB_NAME: "eventdb"
DB_USERNAME: "eventuser"
# Redis Configuration (service-specific)
REDIS_DATABASE: "2"
# Kafka Configuration (service-specific)
KAFKA_CONSUMER_GROUP: "event-service-consumers"
# Service URLs
CONTENT_SERVICE_URL: "http://content-service"
DISTRIBUTION_SERVICE_URL: "http://distribution-service"
# Logging Configuration
LOG_LEVEL: "INFO"
SQL_LOG_LEVEL: "WARN"
LOG_FILE: "logs/event-service.log"

.github/kustomize/base/event-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
labels:
app: event-service
spec:
replicas: 1
selector:
matchLabels:
app: event-service
template:
metadata:
labels:
app: event-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: event-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-event-service
- secretRef:
name: secret-common
- secretRef:
name: secret-event-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/event-service-secret-event-service.yaml (new file)

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-event-service
type: Opaque
stringData:
# Database Password
DB_PASSWORD: "Hi5Jessica!"

.github/kustomize/base/event-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: event-service
labels:
app: event-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: event-service

.github/kustomize/base/ingress.yaml (new file)

@@ -0,0 +1,116 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kt-event-marketing
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
rules:
- host: kt-event-marketing-api.20.214.196.128.nip.io
http:
paths:
# User Service
- path: /api/v1/users
pathType: Prefix
backend:
service:
name: user-service
port:
number: 80
# Content Service
- path: /api/v1/content
pathType: Prefix
backend:
service:
name: content-service
port:
number: 80
# Event Service
- path: /api/v1/events
pathType: Prefix
backend:
service:
name: event-service
port:
number: 80
- path: /api/v1/jobs
pathType: Prefix
backend:
service:
name: event-service
port:
number: 80
- path: /api/v1/redis-test
pathType: Prefix
backend:
service:
name: event-service
port:
number: 80
# AI Service
- path: /api/v1/ai-service
pathType: Prefix
backend:
service:
name: ai-service
port:
number: 80
# Participation Service
- path: /api/v1/participations
pathType: Prefix
backend:
service:
name: participation-service
port:
number: 80
- path: /api/v1/winners
pathType: Prefix
backend:
service:
name: participation-service
port:
number: 80
- path: /debug
pathType: Prefix
backend:
service:
name: participation-service
port:
number: 80
# Analytics Service - Event Analytics
- path: /api/v1/events/([0-9]+)/analytics
pathType: ImplementationSpecific
backend:
service:
name: analytics-service
port:
number: 80
# Analytics Service - User Analytics
- path: /api/v1/users/([0-9]+)/analytics
pathType: ImplementationSpecific
backend:
service:
name: analytics-service
port:
number: 80
# Distribution Service
- path: /distribution
pathType: Prefix
backend:
service:
name: distribution-service
port:
number: 80
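A quick smoke test of the regex-based analytics routes, assuming the ingress host above resolves (the IDs are placeholders; the response body depends on the running services):
```bash
curl -i http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/events/1/analytics
curl -i http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/users/1/analytics
```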

.github/kustomize/base/kustomization.yaml (new file)

@@ -0,0 +1,76 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Common resources
resources:
# Common ConfigMaps and Secrets
- cm-common.yaml
- secret-common.yaml
- secret-imagepull.yaml
# Ingress
- ingress.yaml
# user-service
- user-service-deployment.yaml
- user-service-service.yaml
- user-service-cm-user-service.yaml
- user-service-secret-user-service.yaml
# event-service
- event-service-deployment.yaml
- event-service-service.yaml
- event-service-cm-event-service.yaml
- event-service-secret-event-service.yaml
# ai-service
- ai-service-deployment.yaml
- ai-service-service.yaml
- ai-service-cm-ai-service.yaml
- ai-service-secret-ai-service.yaml
# content-service
- content-service-deployment.yaml
- content-service-service.yaml
- content-service-cm-content-service.yaml
- content-service-secret-content-service.yaml
# distribution-service
- distribution-service-deployment.yaml
- distribution-service-service.yaml
- distribution-service-cm-distribution-service.yaml
- distribution-service-secret-distribution-service.yaml
# participation-service
- participation-service-deployment.yaml
- participation-service-service.yaml
- participation-service-cm-participation-service.yaml
- participation-service-secret-participation-service.yaml
# analytics-service
- analytics-service-deployment.yaml
- analytics-service-service.yaml
- analytics-service-cm-analytics-service.yaml
- analytics-service-secret-analytics-service.yaml
# Common labels for all resources
commonLabels:
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/part-of: kt-event-marketing
# Image tag replacement (will be overridden by overlays)
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: latest
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: latest

.github/kustomize/base/participation-service-cm-participation-service.yaml (new file)

@@ -0,0 +1,24 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-participation-service
data:
# Server Configuration
SERVER_PORT: "8084"
# Database Configuration
DB_HOST: "participation-postgresql"
DB_PORT: "5432"
DB_NAME: "participationdb"
DB_USERNAME: "eventuser"
# Redis Configuration (service-specific)
REDIS_DATABASE: "4"
# Kafka Configuration (service-specific)
KAFKA_CONSUMER_GROUP: "participation-service-consumers"
# Logging Configuration
LOG_LEVEL: "INFO"
SHOW_SQL: "false"
LOG_FILE: "logs/participation-service.log"

.github/kustomize/base/participation-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
labels:
app: participation-service
spec:
replicas: 1
selector:
matchLabels:
app: participation-service
template:
metadata:
labels:
app: participation-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: participation-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8084
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-participation-service
- secretRef:
name: secret-common
- secretRef:
name: secret-participation-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health/liveness
port: 8084
initialDelaySeconds: 60
periodSeconds: 10
failureThreshold: 30
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8084
initialDelaySeconds: 0
periodSeconds: 10
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8084
initialDelaySeconds: 0
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/participation-service-secret-participation-service.yaml (new file)

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-participation-service
type: Opaque
stringData:
DB_PASSWORD: "Hi5Jessica!"

.github/kustomize/base/participation-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: participation-service
labels:
app: participation-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8084
protocol: TCP
name: http
selector:
app: participation-service

.github/kustomize/base/secret-common.yaml (new file)

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-common
type: Opaque
stringData:
# Redis Password
REDIS_PASSWORD: "Hi5Jessica!"
# JWT Secret
JWT_SECRET: "QL0czzXckz18kHnxpaTDoWFkq+3qKO7VQXeNvf2bOoU="

.github/kustomize/base/secret-imagepull.yaml (new file)

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Secret
metadata:
name: kt-event-marketing
type: kubernetes.io/dockerconfigjson
stringData:
.dockerconfigjson: |
{
"auths": {
"acrdigitalgarage01.azurecr.io": {
"username": "acrdigitalgarage01",
"password": "+OY+rmOagorjWvQe/tTk6oqvnZI8SmNbY/Y2o5EDcY+ACRDCDbYk",
"auth": "YWNyZGlnaXRhbGdhcmFnZTAxOitPWStybU9hZ29yald2UWUvdFRrNm9xdm5aSThTbU5iWS9ZMm81RURjWStBQ1JEQ0RiWWs="
}
}
}
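Committing registry credentials in plain text is risky; the same pull secret can instead be created out of band, with the password fetched at run time (a sketch using the Azure CLI):
```bash
kubectl create secret docker-registry kt-event-marketing \
  --docker-server=acrdigitalgarage01.azurecr.io \
  --docker-username=acrdigitalgarage01 \
  --docker-password="$(az acr credential show --name acrdigitalgarage01 --query 'passwords[0].value' -o tsv)" \
  -n kt-event-marketing
```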

.github/kustomize/base/user-service-cm-user-service.yaml (new file)

@@ -0,0 +1,31 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-user-service
data:
# Server Configuration
SERVER_PORT: "8081"
# Database Configuration
DB_URL: "jdbc:postgresql://user-postgresql:5432/userdb"
DB_HOST: "user-postgresql"
DB_PORT: "5432"
DB_NAME: "userdb"
DB_USERNAME: "eventuser"
DB_DRIVER: "org.postgresql.Driver"
DB_KIND: "postgresql"
DB_POOL_MAX: "20"
DB_POOL_MIN: "5"
DB_CONN_TIMEOUT: "30000"
DB_IDLE_TIMEOUT: "600000"
DB_MAX_LIFETIME: "1800000"
DB_LEAK_THRESHOLD: "60000"
# Redis Configuration (service-specific)
REDIS_DATABASE: "0"
# Kafka Configuration (service-specific)
KAFKA_CONSUMER_GROUP: "user-service-consumers"
# Logging Configuration
LOG_FILE_PATH: "logs/user-service.log"

.github/kustomize/base/user-service-deployment.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
labels:
app: user-service
spec:
replicas: 1
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: user-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8081
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-user-service
- secretRef:
name: secret-common
- secretRef:
name: secret-user-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health
port: 8081
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8081
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8081
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3

.github/kustomize/base/user-service-secret-user-service.yaml (new file)

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-user-service
type: Opaque
stringData:
# Database Password
DB_PASSWORD: "Hi5Jessica!"

.github/kustomize/base/user-service-service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: user-service
labels:
app: user-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8081
protocol: TCP
name: http
selector:
app: user-service

.github/kustomize/overlays/dev/ai-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 1
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/analytics-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 1
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/content-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 1
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/distribution-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 1
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/event-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 1
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/kustomization.yaml (new file)

@@ -0,0 +1,38 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: dev
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for dev environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: dev

.github/kustomize/overlays/dev/participation-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 1
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/dev/user-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 1
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

.github/kustomize/overlays/prod/ai-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 3
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/analytics-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 3
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/content-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 3
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/distribution-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 3
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/event-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 3
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/kustomization.yaml (new file)

@@ -0,0 +1,38 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: prod
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for prod environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: prod

.github/kustomize/overlays/prod/participation-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 3
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/prod/user-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 3
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

.github/kustomize/overlays/staging/ai-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 2
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/analytics-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 2
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/content-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 2
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/distribution-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 2
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/event-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 2
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/kustomization.yaml (new file)

@@ -0,0 +1,38 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: staging
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for staging environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: staging

.github/kustomize/overlays/staging/participation-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 2
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/kustomize/overlays/staging/user-service-patch.yaml (new file)

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 2
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

.github/scripts/copy-manifests-to-base.py (new file)

@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""
Copy K8s manifests to Kustomize base directory and remove namespace declarations
"""
import os
import shutil
import yaml
from pathlib import Path
# Service names
SERVICES = [
'user-service',
'event-service',
'ai-service',
'content-service',
'distribution-service',
'participation-service',
'analytics-service'
]
# Base directories
SOURCE_DIR = Path('deployment/k8s')
BASE_DIR = Path('.github/kustomize/base')
def remove_namespace_from_yaml(content):
"""Remove namespace field from YAML content"""
docs = list(yaml.safe_load_all(content))
for doc in docs:
if doc and isinstance(doc, dict):
if 'metadata' in doc and 'namespace' in doc['metadata']:
del doc['metadata']['namespace']
return yaml.dump_all(docs, default_flow_style=False, sort_keys=False)
def copy_and_process_file(source_path, dest_path):
"""Copy file and remove namespace declaration"""
with open(source_path, 'r', encoding='utf-8') as f:
content = f.read()
# Remove namespace from YAML
processed_content = remove_namespace_from_yaml(content)
# Write to destination
dest_path.parent.mkdir(parents=True, exist_ok=True)
with open(dest_path, 'w', encoding='utf-8') as f:
f.write(processed_content)
print(f"✓ Copied and processed: {source_path} -> {dest_path}")
def main():
print("Starting manifest copy to Kustomize base...")
# Copy common resources
print("\n[Common Resources]")
common_dir = SOURCE_DIR / 'common'
for file in ['cm-common.yaml', 'secret-common.yaml', 'secret-imagepull.yaml', 'ingress.yaml']:
source = common_dir / file
if source.exists():
dest = BASE_DIR / file
copy_and_process_file(source, dest)
# Copy service-specific resources
print("\n[Service Resources]")
for service in SERVICES:
service_dir = SOURCE_DIR / service
if not service_dir.exists():
print(f"⚠ Service directory not found: {service_dir}")
continue
print(f"\nProcessing {service}...")
for file in service_dir.glob('*.yaml'):
dest = BASE_DIR / f"{service}-{file.name}"
copy_and_process_file(file, dest)
print("\n✅ All manifests copied to base directory!")
if __name__ == '__main__':
main()
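A typical invocation, run from the repository root (requires PyYAML):
```bash
pip install pyyaml
python3 .github/scripts/copy-manifests-to-base.py
# Verify the flattened file names the base kustomization expects
ls .github/kustomize/base/
```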

.github/scripts/deploy.sh (new file)

@@ -0,0 +1,181 @@
#!/bin/bash
set -e
###############################################################################
# Backend Services Deployment Script for AKS
#
# Usage:
# ./deploy.sh <environment> [service-name]
#
# Arguments:
# environment - Target environment (dev, staging, prod)
# service-name - Specific service to deploy (optional, deploys all if not specified)
#
# Examples:
# ./deploy.sh dev # Deploy all services to dev
# ./deploy.sh prod user-service # Deploy only user-service to prod
###############################################################################
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Validate arguments
if [ $# -lt 1 ]; then
log_error "Usage: $0 <environment> [service-name]"
log_error "Environment must be one of: dev, staging, prod"
exit 1
fi
ENVIRONMENT=$1
SERVICE=${2:-all}
# Validate environment
if [[ ! "$ENVIRONMENT" =~ ^(dev|staging|prod)$ ]]; then
log_error "Invalid environment: $ENVIRONMENT"
log_error "Must be one of: dev, staging, prod"
exit 1
fi
# Load environment variables
ENV_FILE=".github/config/deploy_env_vars_${ENVIRONMENT}"
if [ ! -f "$ENV_FILE" ]; then
log_error "Environment file not found: $ENV_FILE"
exit 1
fi
source "$ENV_FILE"
log_info "Loaded environment configuration: $ENVIRONMENT"
# Service list
SERVICES=(
"user-service"
"event-service"
"ai-service"
"content-service"
"distribution-service"
"participation-service"
"analytics-service"
)
# Validate service if specified
if [ "$SERVICE" != "all" ]; then
if [[ ! " ${SERVICES[@]} " =~ " ${SERVICE} " ]]; then
log_error "Invalid service: $SERVICE"
log_error "Must be one of: ${SERVICES[*]}"
exit 1
fi
SERVICES=("$SERVICE")
fi
log_info "Services to deploy: ${SERVICES[*]}"
# Check prerequisites
log_info "Checking prerequisites..."
if ! command -v az &> /dev/null; then
log_error "Azure CLI not found. Please install Azure CLI."
exit 1
fi
if ! command -v kubectl &> /dev/null; then
log_error "kubectl not found. Please install kubectl."
exit 1
fi
if ! command -v kustomize &> /dev/null; then
log_error "kustomize not found. Please install kustomize."
exit 1
fi
# Azure login check
log_info "Checking Azure authentication..."
if ! az account show &> /dev/null; then
log_error "Not logged in to Azure. Please run 'az login'"
exit 1
fi
# Get AKS credentials
log_info "Getting AKS credentials..."
az aks get-credentials \
--resource-group "$RESOURCE_GROUP" \
--name "$AKS_CLUSTER" \
--overwrite-existing
# Check namespace
log_info "Checking namespace: $NAMESPACE"
if ! kubectl get namespace "$NAMESPACE" &> /dev/null; then
log_warn "Namespace $NAMESPACE does not exist. Creating..."
kubectl create namespace "$NAMESPACE"
fi
# Build and deploy with Kustomize
OVERLAY_DIR=".github/kustomize/overlays/${ENVIRONMENT}"
if [ ! -d "$OVERLAY_DIR" ]; then
log_error "Kustomize overlay directory not found: $OVERLAY_DIR"
exit 1
fi
log_info "Building Kustomize manifests for $ENVIRONMENT..."
cd "$OVERLAY_DIR"
# Update image tags
log_info "Updating image tags to: $ENVIRONMENT"
kustomize edit set image \
${ACR_NAME}.azurecr.io/kt-event-marketing/user-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/event-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/ai-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/content-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/distribution-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/participation-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/analytics-service:${ENVIRONMENT}
# Apply manifests
log_info "Applying manifests to AKS..."
kustomize build . | kubectl apply -f -
cd - > /dev/null
# Wait for deployments
log_info "Waiting for deployments to be ready..."
for service in "${SERVICES[@]}"; do
log_info "Waiting for $service deployment..."
if ! kubectl rollout status deployment/"$service" -n "$NAMESPACE" --timeout=5m; then
log_error "Deployment of $service failed!"
exit 1
fi
log_info "$service is ready"
done
# Verify deployment
log_info "Verifying deployment..."
echo ""
echo "=== Pods Status ==="
kubectl get pods -n "$NAMESPACE" -l app.kubernetes.io/part-of=kt-event-marketing
echo ""
echo "=== Services ==="
kubectl get svc -n "$NAMESPACE"
echo ""
echo "=== Ingress ==="
kubectl get ingress -n "$NAMESPACE"
log_info "Deployment completed successfully!"
log_info "Environment: $ENVIRONMENT"
log_info "Services: ${SERVICES[*]}"

.github/scripts/generate-patches.sh (new file)

@@ -0,0 +1,51 @@
#!/bin/bash
SERVICES=(user-service event-service ai-service content-service distribution-service participation-service analytics-service)
# Staging patches (2 replicas, increased resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/staging/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 2
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"
YAML
done
# Prod patches (3 replicas, maximum resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/prod/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 3
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"
YAML
done
echo "✅ Generated all patch files for staging and prod"
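Run it from the repository root, then confirm the expected patch files exist:
```bash
bash .github/scripts/generate-patches.sh
ls .github/kustomize/overlays/staging/ .github/kustomize/overlays/prod/
```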

.github/workflows/backend-cicd.yaml (new file)

@@ -0,0 +1,207 @@
name: Backend CI/CD Pipeline
on:
push:
branches:
- develop
- main
paths:
- '*-service/**'
- '.github/workflows/backend-cicd.yaml'
- '.github/kustomize/**'
pull_request:
branches:
- develop
- main
paths:
- '*-service/**'
workflow_dispatch:
inputs:
environment:
description: 'Target environment'
required: true
type: choice
options:
- dev
- staging
- prod
service:
description: 'Service to deploy (all for all services)'
required: true
default: 'all'
env:
ACR_NAME: acrdigitalgarage01
RESOURCE_GROUP: rg-digitalgarage-01
AKS_CLUSTER: aks-digitalgarage-01
NAMESPACE: kt-event-marketing
JDK_VERSION: '21'
jobs:
detect-changes:
name: Detect Changed Services
runs-on: ubuntu-latest
outputs:
services: ${{ steps.detect.outputs.services }}
environment: ${{ steps.env.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Determine environment
id: env
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
echo "environment=${{ github.event.inputs.environment }}" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/main" ]; then
echo "environment=prod" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/develop" ]; then
echo "environment=dev" >> $GITHUB_OUTPUT
else
echo "environment=dev" >> $GITHUB_OUTPUT
fi
- name: Detect changed services
id: detect
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" != "all" ]; then
echo "services=[\"${{ github.event.inputs.service }}\"]" >> $GITHUB_OUTPUT
elif [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" = "all" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
CHANGED_SERVICES=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} | \
grep -E '^(user|event|ai|content|distribution|participation|analytics)-service/' | \
cut -d'/' -f1 | sort -u | \
jq -R -s -c 'split("\n") | map(select(length > 0))')
if [ "$CHANGED_SERVICES" = "[]" ] || [ -z "$CHANGED_SERVICES" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
echo "services=$CHANGED_SERVICES" >> $GITHUB_OUTPUT
fi
fi
build-and-push:
name: Build and Push - ${{ matrix.service }}
needs: detect-changes
runs-on: ubuntu-latest
strategy:
matrix:
service: ${{ fromJson(needs.detect-changes.outputs.services) }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up JDK ${{ env.JDK_VERSION }}
uses: actions/setup-java@v4
with:
java-version: ${{ env.JDK_VERSION }}
distribution: 'temurin'
cache: 'gradle'
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build with Gradle
run: ./gradlew ${{ matrix.service }}:build -x test
- name: Run tests
run: ./gradlew ${{ matrix.service }}:test
- name: Build JAR
run: ./gradlew ${{ matrix.service }}:bootJar
- name: Log in to Azure Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.ACR_NAME }}.azurecr.io
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ./${{ matrix.service }}
file: ./${{ matrix.service }}/Dockerfile
push: true
tags: |
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ needs.detect-changes.outputs.environment }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ github.sha }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:latest
deploy:
name: Deploy to AKS - ${{ needs.detect-changes.outputs.environment }}
needs: [detect-changes, build-and-push]
runs-on: ubuntu-latest
environment: ${{ needs.detect-changes.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Azure login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Get AKS credentials
run: |
az aks get-credentials \
--resource-group ${{ env.RESOURCE_GROUP }} \
--name ${{ env.AKS_CLUSTER }} \
--overwrite-existing
- name: Setup Kustomize
uses: imranismail/setup-kustomize@v2
- name: Deploy with Kustomize
run: |
cd .github/kustomize/overlays/${{ needs.detect-changes.outputs.environment }}
kustomize edit set image \
acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:${{ needs.detect-changes.outputs.environment }}
kustomize build . | kubectl apply -f -
- name: Wait for deployment rollout
run: |
for service in $(echo '${{ needs.detect-changes.outputs.services }}' | jq -r '.[]'); do
echo "Waiting for ${service} deployment..."
kubectl rollout status deployment/${service} -n ${{ env.NAMESPACE }} --timeout=5m
done
- name: Verify deployment
run: |
echo "=== Pods Status ==="
kubectl get pods -n ${{ env.NAMESPACE }} -l app.kubernetes.io/part-of=kt-event-marketing
echo "=== Services ==="
kubectl get svc -n ${{ env.NAMESPACE }}
echo "=== Ingress ==="
kubectl get ingress -n ${{ env.NAMESPACE }}
notify:
name: Notify Deployment Result
needs: [detect-changes, deploy]
runs-on: ubuntu-latest
if: always()
steps:
- name: Deployment Success
if: needs.deploy.result == 'success'
run: |
echo "✅ Deployment to ${{ needs.detect-changes.outputs.environment }} succeeded!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
- name: Deployment Failure
if: needs.deploy.result == 'failure'
run: |
echo "❌ Deployment to ${{ needs.detect-changes.outputs.environment }} failed!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
exit 1
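The change-detection pipeline can be dry-run locally to see which services a commit range would select (the same grep/cut/jq chain the `detect-changes` job uses):
```bash
git diff --name-only HEAD~1 HEAD \
  | grep -E '^(user|event|ai|content|distribution|participation|analytics)-service/' \
  | cut -d'/' -f1 | sort -u \
  | jq -R -s -c 'split("\n") | map(select(length > 0))'
```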

ai-service/Dockerfile (new file)

@@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8083/api/v1/ai-service/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]
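To build and smoke-test the image locally (a sketch; the bootJar must be built first, and the `:local` tag is just an example):
```bash
./gradlew ai-service:bootJar
docker build -t kt-event-marketing/ai-service:local ./ai-service
docker run --rm -p 8083:8083 kt-event-marketing/ai-service:local
```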

ai-service/src/main/resources/application.yml (modified)

@@ -37,7 +37,7 @@ spring:
 server:
   port: ${SERVER_PORT:8083}
   servlet:
-    context-path: /
+    context-path: /api/v1/ai-service
   encoding:
     charset: UTF-8
     enabled: true

analytics-service/Dockerfile (new file)

@@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8086/api/v1/analytics/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

analytics-service/src/main/resources/application.yml (modified)

@@ -73,6 +73,8 @@ spring:
 # Server
 server:
   port: ${SERVER_PORT:8086}
+  servlet:
+    context-path: /api/v1/analytics
 # JWT
 jwt:

@@ -0,0 +1,770 @@
# Backend GitHub Actions Pipeline Authoring Guide
[Requirements]
- Write a guide for building a GitHub Actions-based CI/CD pipeline
- Manage Kustomize manifests per environment (dev/staging/prod) and implement automated deployment
- Include SonarQube code quality analysis with a Quality Gate
- Cover the entire process, from generating the Kustomize manifests to deployment
- Produce the setup and pipeline authoring guide in '[결과파일]' (the result file)
- Actually perform the tasks below and create the files:
  - Create the Kustomize directory structure
  - Write the base kustomization
  - Write the per-environment overlays
  - Create the per-environment patch files
  - Write the GitHub Actions workflow file
  - Write the per-environment deployment variable files
  - Write the manual deployment script
[Procedure]
- Check the prerequisites
  In the '[실행정보]' (execution info) section of the prompt, confirm:
  - {ACR_NAME}: Azure Container Registry name
  - {RESOURCE_GROUP}: Azure resource group name
  - {AKS_CLUSTER}: AKS cluster name
  - {NAMESPACE}: namespace name
  Example:
```
[실행정보]
- ACR_NAME: acrdigitalgarage01
- RESOURCE_GROUP: rg-digitalgarage-01
- AKS_CLUSTER: aks-digitalgarage-01
- NAMESPACE: phonebill-dg0500
```
- Identify the system name and service names
  Check settings.gradle:
  - {SYSTEM_NAME}: rootProject.name
  - {SERVICE_NAMES}: the values of the include statements below include 'common'
  Example: the service names listed below include 'common'.
```
rootProject.name = 'phonebill'
include 'common'
include 'api-gateway'
include 'user-service'
include 'order-service'
include 'payment-service'
```
- Check the JDK version
  Check the JDK version in the root build.gradle.
  {JDK_VERSION}: read it from the 'java' section; in the example below it is 21.
```
java {
toolchain {
languageVersion = JavaLanguageVersion.of(21)
}
}
```
- GitHub repository environment setup
  - Configure GitHub Repository Secrets
    - Azure access credentials
    ```
    # Azure Service Principal
    Register under Repository Settings > Secrets and variables > Actions > Repository secrets
    AZURE_CREDENTIALS:
    {
      "clientId": "{client ID}",
      "clientSecret": "{client secret}",
      "subscriptionId": "{subscription ID}",
      "tenantId": "{tenant ID}"
    }
    Example:
    {
      "clientId": "5e4b5b41-7208-48b7-b821-d6d5acf50ecf",
      "clientSecret": "ldu8Q~GQEzFYU.dJX7_QsahR7n7C2xqkIM6hqbV8",
      "subscriptionId": "2513dd36-7978-48e3-9a7c-b221d4874f66",
      "tenantId": "4f0a3bfd-1156-4cce-8dc2-a049a13dba23"
    }
    ```
    - ACR credentials
      Obtain them with:
      az acr credential show --name {ACR name}
      e.g., az acr credential show --name acrdigitalgarage01
      ```
      ACR_USERNAME: {ACR_NAME}
      ACR_PASSWORD: {ACR password}
      ```
    - SonarQube URL and auth token
      How to obtain SONAR_HOST_URL and create SONAR_TOKEN:
      SONAR_HOST_URL: run the command below and use http://{External IP}
      kubectl get svc -n sonarqube
      e.g., http://20.249.187.69
      Create the SONAR_TOKEN value as follows:
      - Log in to SonarQube, then click 'Administrator' > My Account at the top right
      - Open the Security tab and generate a token
      ```
      SONAR_TOKEN: {SonarQube token}
      SONAR_HOST_URL: {SonarQube server URL}
      ```
    - Docker Hub (to avoid image pull rate limits)
      How to create the Docker Hub credential:
      - Log in to Docker Hub (https://hub.docker.com)
      - Click the profile icon at the top right and select Account Settings
      - Click 'Personal Access Tokens' in the left menu and generate a token
      ```
      DOCKERHUB_USERNAME: {Docker Hub username}
      DOCKERHUB_PASSWORD: {Docker Hub password}
      ```
  - Configure GitHub Repository Variables
    ```
    # Workflow control variables
    Register under Repository Settings > Secrets and variables > Actions > Variables > Repository variables
    ENVIRONMENT: dev (default; selectable on manual runs: dev/staging/prod)
    SKIP_SONARQUBE: true (default; selectable on manual runs: true/false)
    ```
    **Usage:**
    - **Automatic runs**: pushes/PRs use the defaults (ENVIRONMENT=dev, SKIP_SONARQUBE=true)
    - **Manual runs**: Actions tab > "Backend Services CI/CD" > click "Run workflow"
      - Environment: choose dev/staging/prod
      - Skip SonarQube Analysis: choose true/false
- Create the Kustomize directory structure
  - Create the Kustomize directories dedicated to GitHub Actions
  ```bash
  mkdir -p .github/kustomize/{base,overlays/{dev,staging,prod}}
  mkdir -p .github/kustomize/base/{common,{service1},{service2},...}
  mkdir -p .github/{config,scripts}
  ```
  - Copy the existing k8s manifests into base
  ```bash
  # Copy the existing deployment/k8s/* files into base
  cp deployment/k8s/common/* .github/kustomize/base/common/
  cp deployment/k8s/{service}/* .github/kustomize/base/{service}/
  # Remove hardcoded namespaces
  find .github/kustomize/base -name "*.yaml" -exec sed -i 's/namespace: .*//' {} \;
  ```
- Write the base kustomization
  Create `.github/kustomize/base/kustomization.yaml`:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
name: {SYSTEM_NAME}-base
resources:
# Common resources
- common/configmap-common.yaml
- common/secret-common.yaml
- common/secret-imagepull.yaml
- common/ingress.yaml
# 각 서비스별 리소스
- {SERVICE_NAME}/deployment.yaml
- {SERVICE_NAME}/service.yaml
- {SERVICE_NAME}/configmap.yaml
- {SERVICE_NAME}/secret.yaml
images:
- name: {ACR_NAME}.azurecr.io/{SYSTEM_NAME}/{SERVICE_NAME}
newTag: latest
```
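  The base can then be rendered locally to catch missing resources early; a minimal check, assuming kubectl 1.28+ with built-in kustomize support:
  ```bash
  # Render the base; this fails fast if kustomization.yaml references a missing file
  kubectl kustomize .github/kustomize/base/ > /tmp/base-render.yaml
  # Rough sanity check: expect one Deployment per service
  grep -c "^kind: Deployment" /tmp/base-render.yaml
  ```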
- Create the per-environment patch files
  Create the patch files each environment needs.
  **Key principles**:
  - **Do not add fields that are absent from the base manifests**
  - **Fields must match the base manifests**
  - Use 'stringData', not 'data', in Secret manifests
  **1. Create the common ConfigMap patch file**
  `.github/kustomize/overlays/{ENVIRONMENT}/cm-common-patch.yaml`
  - Copy the base manifest for each environment:
  ```
  cp .github/kustomize/base/common/cm-common.yaml .github/kustomize/overlays/{ENVIRONMENT}/cm-common-patch.yaml
  ```
  - Set SPRING_PROFILES_ACTIVE to match the environment (dev/staging/prod), as in the sketch after this list
  - Set DDL_AUTO: "update" for dev, "validate" for staging/prod
  - Keep the JWT token lifetime short in prod for security
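  For reference, a minimal sketch of a prod cm-common-patch.yaml; the JWT key name below is an assumption, and every key must mirror one that actually exists in the base cm-common.yaml:
  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cm-common
  data:
    SPRING_PROFILES_ACTIVE: "prod"          # dev/staging/prod per environment
    DDL_AUTO: "validate"                    # "update" only in dev
    JWT_ACCESS_TOKEN_VALIDITY: "3600000"    # assumed key name; shorter lifetime in prod
  ```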
  **2. Create the common Secret patch file**
  `.github/kustomize/overlays/{ENVIRONMENT}/secret-common-patch.yaml`
  - Copy the base manifest for each environment:
  ```
  cp .github/kustomize/base/common/secret-common.yaml .github/kustomize/overlays/{ENVIRONMENT}/secret-common-patch.yaml
  ```
  **3. Create the Ingress patch file**
  `.github/kustomize/overlays/{ENVIRONMENT}/ingress-patch.yaml`
  - Override the base ingress.yaml per environment (see the sketch after this list)
  - **⚠️ Important**: the default dev Ingress host must be **exactly identical** to the one in the base ingress.yaml
    - If the base has `host: {SYSTEM_NAME}-api.20.214.196.128.nip.io`,
    - dev must also use `host: {SYSTEM_NAME}-api.20.214.196.128.nip.io`
    - **Never** change it to something like `{SYSTEM_NAME}-dev-api.xxx`
  - Set the staging/prod domains in the form {SYSTEM_NAME}.{domain}
  - Use '{SERVICE_NAME}' as the service name
  - Enforce HTTPS and configure the SSL certificate for staging/prod
    - staging/prod: nginx.ingress.kubernetes.io/ssl-redirect: "true"
    - dev: nginx.ingress.kubernetes.io/ssl-redirect: "false"
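  A minimal sketch of a dev ingress-patch.yaml under these rules; the path and backend service shown are assumptions and the rule structure must mirror the base ingress.yaml exactly:
  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: {SYSTEM_NAME}
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "false"   # dev only; "true" in staging/prod
  spec:
    rules:
      - host: {SYSTEM_NAME}-api.20.214.196.128.nip.io     # identical to the base host
        http:
          paths:
            - path: /api/v1/users                         # assumed example path
              pathType: Prefix
              backend:
                service:
                  name: user-service
                  port:
                    number: 80
  ```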
  **4. Create the Deployment patch files** ⚠️ **Important**
  Create a separate file per service (see the sketch after this list):
  `.github/kustomize/overlays/{ENVIRONMENT}/deployment-{SERVICE_NAME}-patch.yaml`
  **Must include:**
  - ✅ **replicas**: set each service Deployment's replica count per environment
    - dev: 1 replica for every service (to save resources)
    - staging: 2 replicas for every service
    - prod: 3 replicas for every service
  - ✅ **resources**: set each service Deployment's resources per environment
    - dev: requests(256m CPU, 256Mi memory), limits(1024m CPU, 1024Mi memory)
    - staging: requests(512m CPU, 512Mi memory), limits(2048m CPU, 2048Mi memory)
    - prod: requests(1024m CPU, 1024Mi memory), limits(4096m CPU, 4096Mi memory)
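  For reference, a minimal sketch of a prod deployment patch for a hypothetical user-service; the container name must match the one used in the base deployment.yaml:
  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: user-service
  spec:
    replicas: 3                    # prod: 3, staging: 2, dev: 1
    template:
      spec:
        containers:
          - name: user-service     # must match the base container name
            resources:
              requests:
                cpu: 1024m
                memory: 1024Mi
              limits:
                cpu: 4096m
                memory: 4096Mi
  ```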
  **5. Create the per-service Secret patch files**
  Create a separate file per service (see the sketch after this list):
  `.github/kustomize/overlays/{ENVIRONMENT}/secret-{SERVICE_NAME}-patch.yaml`
  - Copy the base manifest for each environment:
  ```
  cp .github/kustomize/base/{SERVICE_NAME}/secret-{SERVICE_NAME}.yaml .github/kustomize/overlays/{ENVIRONMENT}/secret-{SERVICE_NAME}-patch.yaml
  ```
  - Update it with the environment-specific database connection details
  - **⚠️ Important**: sensitive values such as passwords are configured separately when the real environment is provisioned
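  For reference, a minimal sketch of a staging secret patch for a hypothetical user-service; the key names (DB_URL, DB_USERNAME, DB_PASSWORD) are assumptions and must mirror the base secret exactly:
  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-user-service
  type: Opaque
  stringData:                                               # stringData, not data, per the principles above
    DB_URL: jdbc:postgresql://user-db-staging:5432/userdb   # assumed host/db name
    DB_USERNAME: userapp                                    # assumed
    DB_PASSWORD: CHANGE_ME                                  # placeholder; set during provisioning
  ```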
- Write the per-environment overlays
  Create `overlays/{ENVIRONMENT}/kustomization.yaml` for each environment (a render check follows the example):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: {NAMESPACE}
resources:
- ../../base
patches:
- path: cm-common-patch.yaml
target:
kind: ConfigMap
name: cm-common
- path: deployment-{SERVICE_NAME}-patch.yaml
target:
kind: Deployment
name: {SERVICE_NAME}
- path: ingress-patch.yaml
target:
kind: Ingress
name: {SYSTEM_NAME}
- path: secret-common-patch.yaml
target:
kind: Secret
name: secret-common
- path: secret-{SERVICE_NAME}-patch.yaml
target:
kind: Secret
name: secret-{SERVICE_NAME}
images:
- name: {ACR_NAME}.azurecr.io/{SYSTEM_NAME}/{SERVICE_NAME}
newTag: {ENVIRONMENT}-latest
```
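  Each overlay can then be rendered locally to confirm that every patch resolves against the base; a minimal check, assuming kubectl with built-in kustomize:
  ```bash
  # Render each overlay; a failure here usually means a patch target does not match the base
  for env in dev staging prod; do
    kubectl kustomize .github/kustomize/overlays/$env > /dev/null && echo "$env OK"
  done
  ```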
- Write the GitHub Actions workflow
  Explains how to create the `.github/workflows/backend-cicd.yaml` file.
  Main building blocks:
  - **Build & Test**: Gradle build and unit tests
  - **SonarQube Analysis**: code-quality analysis and Quality Gate
  - **Container Build & Push**: build and push images with environment-specific tags
  - **Kustomize Deploy**: apply the environment-specific manifests
```yaml
name: Backend Services CI/CD
on:
push:
branches: [ main, develop ]
paths:
      - '{SERVICE_NAME1}/**'
      - '{SERVICE_NAME2}/**'
      - '{SERVICE_NAME3}/**'
      - '{SERVICE_NAMEN}/**'
- 'common/**'
- '.github/**'
pull_request:
branches: [ main ]
workflow_dispatch:
inputs:
ENVIRONMENT:
description: 'Target environment'
required: true
default: 'dev'
type: choice
options:
- dev
- staging
- prod
SKIP_SONARQUBE:
description: 'Skip SonarQube Analysis'
required: false
default: 'true'
type: choice
options:
- 'true'
- 'false'
env:
  REGISTRY: {ACR_NAME}.azurecr.io
  IMAGE_ORG: {SYSTEM_NAME}
  RESOURCE_GROUP: {RESOURCE_GROUP}
  AKS_CLUSTER: {AKS_CLUSTER}
  NAMESPACE: {NAMESPACE}
jobs:
build:
name: Build and Test
runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.set_outputs.outputs.image_tag }}
      environment: ${{ steps.set_outputs.outputs.environment }}
      registry: ${{ steps.set_outputs.outputs.registry }}
      image_org: ${{ steps.set_outputs.outputs.image_org }}
steps:
- name: Check out code
uses: actions/checkout@v4
      - name: Set up JDK {JDK_VERSION}
        uses: actions/setup-java@v3
        with:
          java-version: '{JDK_VERSION}'
          distribution: 'temurin'
          cache: 'gradle'
- name: Determine environment
id: determine_env
run: |
# Use input parameter or default to 'dev'
ENVIRONMENT="${{ github.event.inputs.ENVIRONMENT || 'dev' }}"
echo "environment=$ENVIRONMENT" >> $GITHUB_OUTPUT
- name: Load environment variables
id: env_vars
run: |
ENV=${{ steps.determine_env.outputs.environment }}
# Initialize variables with defaults
REGISTRY="{ACR_NAME}.azurecr.io"
IMAGE_ORG="{SYSTEM_NAME}"
RESOURCE_GROUP="{RESOURCE_GROUP}"
AKS_CLUSTER="{AKS_CLUSTER}"
NAMESPACE="{NAMESPACE}"
# Read environment variables from .github/config file
if [[ -f ".github/config/deploy_env_vars_${ENV}" ]]; then
while IFS= read -r line || [[ -n "$line" ]]; do
# Skip comments and empty lines
[[ "$line" =~ ^#.*$ ]] && continue
[[ -z "$line" ]] && continue
# Extract key-value pairs
key=$(echo "$line" | cut -d '=' -f1)
value=$(echo "$line" | cut -d '=' -f2-)
# Override defaults if found in config
case "$key" in
"resource_group") RESOURCE_GROUP="$value" ;;
"cluster_name") AKS_CLUSTER="$value" ;;
esac
done < ".github/config/deploy_env_vars_${ENV}"
fi
# Export for other jobs
echo "REGISTRY=$REGISTRY" >> $GITHUB_ENV
echo "IMAGE_ORG=$IMAGE_ORG" >> $GITHUB_ENV
echo "RESOURCE_GROUP=$RESOURCE_GROUP" >> $GITHUB_ENV
echo "AKS_CLUSTER=$AKS_CLUSTER" >> $GITHUB_ENV
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build with Gradle
run: |
./gradlew build -x test
- name: SonarQube Analysis & Quality Gate
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
run: |
# Check if SonarQube should be skipped
SKIP_SONARQUBE="${{ github.event.inputs.SKIP_SONARQUBE || 'true' }}"
if [[ "$SKIP_SONARQUBE" == "true" ]]; then
echo "⏭️ Skipping SonarQube Analysis (SKIP_SONARQUBE=$SKIP_SONARQUBE)"
exit 0
fi
# Define services array
services=({SERVICE_NAME1} {SERVICE_NAME2} {SERVICE_NAME3} {SERVICE_NAMEN})
# Run tests, coverage reports, and SonarQube analysis for each service
for service in "${services[@]}"; do
./gradlew :$service:test :$service:jacocoTestReport :$service:sonar \
-Dsonar.projectKey={SYSTEM_NAME}-$service-${{ steps.determine_env.outputs.environment }} \
-Dsonar.projectName={SYSTEM_NAME}-$service-${{ steps.determine_env.outputs.environment }} \
-Dsonar.host.url=$SONAR_HOST_URL \
-Dsonar.token=$SONAR_TOKEN \
-Dsonar.java.binaries=build/classes/java/main \
-Dsonar.coverage.jacoco.xmlReportPaths=build/reports/jacoco/test/jacocoTestReport.xml \
-Dsonar.exclusions=**/config/**,**/entity/**,**/dto/**,**/*Application.class,**/exception/**
done
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: app-builds
path: |
{SERVICE_NAME1}/build/libs/*.jar
{SERVICE_NAME2}/build/libs/*.jar
{SERVICE_NAME3}/build/libs/*.jar
{SERVICE_NAMEN}/build/libs/*.jar
      - name: Set outputs
        id: set_outputs
        run: |
          # Generate a timestamp for the image tag
          IMAGE_TAG=$(date +%Y%m%d%H%M%S)
          echo "image_tag=$IMAGE_TAG" >> $GITHUB_OUTPUT
          echo "environment=${{ steps.determine_env.outputs.environment }}" >> $GITHUB_OUTPUT
          # Expose registry/org (set via GITHUB_ENV above) so downstream jobs can reference them
          echo "registry=$REGISTRY" >> $GITHUB_OUTPUT
          echo "image_org=$IMAGE_ORG" >> $GITHUB_OUTPUT
release:
name: Build and Push Docker Images
needs: build
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Download build artifacts
uses: actions/download-artifact@v4
with:
name: app-builds
- name: Set environment variables from build job
run: |
echo "REGISTRY=${{ needs.build.outputs.registry }}" >> $GITHUB_ENV
echo "IMAGE_ORG=${{ needs.build.outputs.image_org }}" >> $GITHUB_ENV
echo "ENVIRONMENT=${{ needs.build.outputs.environment }}" >> $GITHUB_ENV
echo "IMAGE_TAG=${{ needs.build.outputs.image_tag }}" >> $GITHUB_ENV
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub (prevent rate limit)
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Login to Azure Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- name: Build and push Docker images for all services
run: |
# Define services array
services=({SERVICE_NAME1} {SERVICE_NAME2} {SERVICE_NAME3} {SERVICE_NAMEN})
# Build and push each service image
for service in "${services[@]}"; do
echo "Building and pushing $service..."
docker build \
--build-arg BUILD_LIB_DIR="$service/build/libs" \
--build-arg ARTIFACTORY_FILE="$service.jar" \
-f deployment/container/Dockerfile-backend \
-t ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/$service:${{ needs.build.outputs.environment }}-${{ needs.build.outputs.image_tag }} .
docker push ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/$service:${{ needs.build.outputs.environment }}-${{ needs.build.outputs.image_tag }}
done
deploy:
name: Deploy to Kubernetes
needs: [build, release]
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set image tag environment variable
run: |
echo "IMAGE_TAG=${{ needs.build.outputs.image_tag }}" >> $GITHUB_ENV
echo "ENVIRONMENT=${{ needs.build.outputs.environment }}" >> $GITHUB_ENV
- name: Install Azure CLI
run: |
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
- name: Azure Login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Setup kubectl
uses: azure/setup-kubectl@v3
- name: Get AKS Credentials
run: |
az aks get-credentials --resource-group ${{ env.RESOURCE_GROUP }} --name ${{ env.AKS_CLUSTER }} --overwrite-existing
- name: Create namespace
run: |
kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o yaml | kubectl apply -f -
- name: Install Kustomize
run: |
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/
      - name: Update Kustomize images and deploy
        run: |
          # Move into the environment-specific overlay directory
          cd .github/kustomize/overlays/${{ env.ENVIRONMENT }}
          # Update the image tag for each service
          kustomize edit set image ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/api-gateway:${{ env.ENVIRONMENT }}-${{ env.IMAGE_TAG }}
          kustomize edit set image ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/user-service:${{ env.ENVIRONMENT }}-${{ env.IMAGE_TAG }}
          kustomize edit set image ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/bill-service:${{ env.ENVIRONMENT }}-${{ env.IMAGE_TAG }}
          kustomize edit set image ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/product-service:${{ env.ENVIRONMENT }}-${{ env.IMAGE_TAG }}
          kustomize edit set image ${{ env.REGISTRY }}/${{ env.IMAGE_ORG }}/kos-mock:${{ env.ENVIRONMENT }}-${{ env.IMAGE_TAG }}
          # Apply the manifests
          kubectl apply -k .
- name: Wait for deployments to be ready
run: |
echo "Waiting for deployments to be ready..."
kubectl -n ${{ env.NAMESPACE }} wait --for=condition=available deployment/${{ env.ENVIRONMENT }}-api-gateway --timeout=300s
kubectl -n ${{ env.NAMESPACE }} wait --for=condition=available deployment/${{ env.ENVIRONMENT }}-user-service --timeout=300s
kubectl -n ${{ env.NAMESPACE }} wait --for=condition=available deployment/${{ env.ENVIRONMENT }}-bill-service --timeout=300s
kubectl -n ${{ env.NAMESPACE }} wait --for=condition=available deployment/${{ env.ENVIRONMENT }}-product-service --timeout=300s
kubectl -n ${{ env.NAMESPACE }} wait --for=condition=available deployment/${{ env.ENVIRONMENT }}-kos-mock --timeout=300s
```
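  Before committing the workflow, it can be linted locally; a quick check, assuming the actionlint CLI is installed:
  ```bash
  # Static checks for GitHub Actions workflows (syntax, expressions, shell snippets)
  actionlint .github/workflows/backend-cicd.yaml
  ```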
- Write the GitHub Actions environment-specific config files
  How to create the `.github/config/deploy_env_vars_{ENVIRONMENT}` files:
**.github/config/deploy_env_vars_dev**
```bash
# dev Environment Configuration
resource_group={RESOURCE_GROUP}
cluster_name={AKS_CLUSTER}
```
**.github/config/deploy_env_vars_staging**
```bash
# staging Environment Configuration
resource_group={RESOURCE_GROUP}
cluster_name={AKS_CLUSTER}
```
**.github/config/deploy_env_vars_prod**
```bash
# prod Environment Configuration
resource_group={RESOURCE_GROUP}
cluster_name={AKS_CLUSTER}
```
  **Note**: with the Kustomize approach, namespace, replicas, resources, and similar settings are managed in kustomization.yaml and the patch files.
- Write the GitHub Actions manual deployment script
  Create `.github/scripts/deploy-actions.sh`:
```bash
#!/bin/bash
set -e
ENVIRONMENT=${1:-dev}
IMAGE_TAG=${2:-latest}
echo "🚀 Manual deployment starting..."
echo "Environment: $ENVIRONMENT"
echo "Image Tag: $IMAGE_TAG"
# Check if kustomize is installed
if ! command -v kustomize &> /dev/null; then
echo "Installing Kustomize..."
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/
fi
# Load environment variables from .github/config
if [[ -f ".github/config/deploy_env_vars_${ENVIRONMENT}" ]]; then
source ".github/config/deploy_env_vars_${ENVIRONMENT}"
echo "✅ Environment variables loaded for $ENVIRONMENT"
else
echo "❌ Environment configuration file not found: .github/config/deploy_env_vars_${ENVIRONMENT}"
exit 1
fi
# Create namespace
echo "📝 Creating namespace {NAMESPACE}..."
kubectl create namespace {NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -
# Update the environment-specific image tags (using .github/kustomize)
cd .github/kustomize/overlays/${ENVIRONMENT}
echo "🔄 Updating image tags..."
# Define the services array
services=({SERVICE_NAME1} {SERVICE_NAME2} {SERVICE_NAME3} {SERVICE_NAMEN})
# Update the image tag for each service
for service in "${services[@]}"; do
  kustomize edit set image {ACR_NAME}.azurecr.io/{SYSTEM_NAME}/$service:${ENVIRONMENT}-${IMAGE_TAG}
done
echo "🚀 Deploying to Kubernetes..."
# Apply the manifests
kubectl apply -k .
echo "⏳ Waiting for deployments to be ready..."
# Check the rollout status per service
for service in "${services[@]}"; do
  kubectl rollout status deployment/${ENVIRONMENT}-$service -n {NAMESPACE} --timeout=300s
done
echo "🔍 Health check..."
# API Gateway health check (assumes the first service is the API Gateway)
GATEWAY_SERVICE=${services[0]}
GATEWAY_POD=$(kubectl get pod -n {NAMESPACE} -l app.kubernetes.io/name=${ENVIRONMENT}-$GATEWAY_SERVICE -o jsonpath='{.items[0].metadata.name}')
kubectl -n {NAMESPACE} exec $GATEWAY_POD -- curl -f http://localhost:8080/actuator/health || echo "Health check failed, but deployment completed"
echo "📋 Service Information:"
kubectl get pods -n {NAMESPACE}
kubectl get services -n {NAMESPACE}
kubectl get ingress -n {NAMESPACE}
echo "✅ GitHub Actions deployment completed successfully!"
```
- Document the SonarQube project setup
  - Create a project per service in SonarQube
  - Quality Gate settings:
```
Coverage: >= 80%
Duplicated Lines: <= 3%
Maintainability Rating: <= A
Reliability Rating: <= A
Security Rating: <= A
```
- Document the rollback procedures
  - Roll back to a previous version from GitHub Actions:
  ```bash
  # Roll back by re-running an earlier workflow run
  1. GitHub > Actions > select a previously successful workflow run
  2. Click "Re-run all jobs"
  ```
  - Roll back with kubectl:
  ```bash
  # Roll back to a specific revision
  kubectl rollout undo deployment/{ENVIRONMENT}-{SERVICE_NAME} -n {NAMESPACE} --to-revision=2
  # Check the rollback status
  kubectl rollout status deployment/{ENVIRONMENT}-{SERVICE_NAME} -n {NAMESPACE}
  ```
  - Roll back with the manual script:
  ```bash
  # Deploy the previous stable image tag
  ./.github/scripts/deploy-actions.sh {ENVIRONMENT} {previous tag}
  ```
[Checklist]
A checklist to ensure no step of the GitHub Actions CI/CD pipeline build is missed.
## 📋 Preparation checklist
- [ ] Verified the system name and service names in settings.gradle
- [ ] Verified the ACR name, resource group, and AKS cluster name in the execution-info section
## 📂 GitHub Actions Kustomize structure checklist
- [ ] Directory structure created: `.github/kustomize/{base,overlays/{dev,staging,prod}}`
- [ ] Per-service base directories created: `.github/kustomize/base/{common,{SERVICE_NAMES}}`
- [ ] Existing k8s manifests copied into base
- [ ] **Resource-omission checks completed**:
  - [ ] Ran `ls .github/kustomize/base/*/` and checked the files in every service directory
  - [ ] Confirmed the required files exist per service (deployment.yaml and service.yaml are mandatory)
  - [ ] Confirmed ConfigMap files, where present, follow the `cm-{SERVICE_NAME}.yaml` naming rule
  - [ ] Confirmed Secret files, where present, follow the `secret-{SERVICE_NAME}.yaml` naming rule
- [ ] Base kustomization.yaml created
  - [ ] Every service's deployment.yaml and service.yaml included
  - [ ] Every existing ConfigMap file included (`cm-{SERVICE_NAME}.yaml`)
  - [ ] Every existing Secret file included (`secret-{SERVICE_NAME}.yaml`)
- [ ] **Verification commands executed**:
  - [ ] `kubectl kustomize .github/kustomize/base/` ran successfully
  - [ ] All resources rendered with no error messages
## 🔧 GitHub Actions per-environment overlay checklist
### Critical checks
- Remove Secret files from the base kustomization that do not actually exist
### Common checks
- **Verify that no field absent from the base manifests was added**
- **Verify that the fields match the base manifests**
- Verify that Secret manifests use 'stringData', not 'data'
- **⚠️ Kustomize patch method change**: `patchesStrategicMerge` → `patches` (with explicit target)
### DEV environment
- [ ] `.github/kustomize/overlays/dev/kustomization.yaml` created
- [ ] `.github/kustomize/overlays/dev/cm-common-patch.yaml` created (dev profile, update DDL)
- [ ] `.github/kustomize/overlays/dev/secret-common-patch.yaml` created
- [ ] `.github/kustomize/overlays/dev/ingress-patch.yaml` created (**host default identical to the base ingress.yaml**)
- [ ] `.github/kustomize/overlays/dev/deployment-{SERVICE_NAME}-patch.yaml` created (replicas and resources set)
- [ ] Per-service `.github/kustomize/overlays/dev/secret-{SERVICE_NAME}-patch.yaml` created
### STAGING environment
- [ ] `.github/kustomize/overlays/staging/kustomization.yaml` created
- [ ] `.github/kustomize/overlays/staging/cm-common-patch.yaml` created (staging profile, validate DDL)
- [ ] `.github/kustomize/overlays/staging/secret-common-patch.yaml` created
- [ ] `.github/kustomize/overlays/staging/ingress-patch.yaml` created (staging domain, HTTPS, SSL certificate)
- [ ] `.github/kustomize/overlays/staging/deployment-{SERVICE_NAME}-patch.yaml` created (replicas and resources set)
- [ ] Per-service `.github/kustomize/overlays/staging/secret-{SERVICE_NAME}-patch.yaml` created
### PROD environment
- [ ] `.github/kustomize/overlays/prod/kustomization.yaml` created
- [ ] `.github/kustomize/overlays/prod/cm-common-patch.yaml` created (prod profile, validate DDL, short JWT lifetime)
- [ ] `.github/kustomize/overlays/prod/secret-common-patch.yaml` created
- [ ] `.github/kustomize/overlays/prod/ingress-patch.yaml` created (prod domain, HTTPS, SSL certificate)
- [ ] `.github/kustomize/overlays/prod/deployment-{SERVICE_NAME}-patch.yaml` created (replicas and resources set)
- [ ] Per-service `.github/kustomize/overlays/prod/secret-{SERVICE_NAME}-patch.yaml` created
## ⚙️ GitHub Actions configuration and scripts checklist
- [ ] Environment config files created: `.github/config/deploy_env_vars_{dev,staging,prod}`
- [ ] GitHub Actions workflow file `.github/workflows/backend-cicd.yaml` created
- [ ] Workflow contents verified
  - Build, SonarQube, Docker Build & Push, and Deploy stages included
  - JDK version checked: `java-version: '{JDK_VERSION}'`
  - Variable reference syntax checked: uses `${{ needs.build.outputs.* }}`
  - All service names replaced with the project's actual service names
  - **SKIP_SONARQUBE handling verified**: defaults to 'true', executed conditionally
  - **Placeholder usage verified**: {ACR_NAME}, {SYSTEM_NAME}, {SERVICE_NAME}, etc.
- [ ] Manual deployment script `.github/scripts/deploy-actions.sh` created
- [ ] Script execute permission set (`chmod +x .github/scripts/*.sh`)
[Output Files]
- Guide: .github/actions-pipeline-guide.md
- GitHub Actions workflow: .github/workflows/backend-cicd.yaml
- GitHub Actions Kustomize manifests: .github/kustomize/*
- GitHub Actions environment config files: .github/config/*
- GitHub Actions manual deployment script: .github/scripts/deploy-actions.sh

View File

@ -0,0 +1,582 @@
# KT Event Marketing - Backend CI/CD Guide
## Table of Contents
1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Prerequisites](#prerequisites)
4. [GitHub Secrets Setup](#github-secrets-setup)
5. [Deployment Environments](#deployment-environments)
6. [Pipeline Structure](#pipeline-structure)
7. [Usage](#usage)
8. [Troubleshooting](#troubleshooting)
---
## Overview
This document is the build and operations guide for the CI/CD pipeline of the KT Event Marketing backend microservices.
### System Information
- **System name**: kt-event-marketing
- **JDK version**: 21
- **Build tool**: Gradle
- **Container registry**: Azure Container Registry (ACR)
- **Deployment target**: Azure Kubernetes Service (AKS)
### Service List
1. user-service (8081)
2. event-service (8082)
3. ai-service (8083)
4. content-service (8084)
5. distribution-service (8085)
6. participation-service (8086)
7. analytics-service (8087)
---
## Architecture
### CI/CD Pipeline Flow
```
┌─────────────────┐
│   Code Push     │
│   (GitHub)      │
└────────┬────────┘
         ▼
┌─────────────────────────────────────────────┐
│         GitHub Actions Workflow             │
│                                             │
│  1. Detect Changed Services                 │
│  2. Build & Test (Gradle)                   │
│  3. Build Docker Image                      │
│  4. Push to ACR                             │
│  5. Deploy to AKS (Kustomize)               │
│  6. Verify Deployment                       │
└────────┬────────────────────────────────────┘
         ▼
┌─────────────────┐
│   AKS Cluster   │
│  (Running Pods) │
└─────────────────┘
```
### Kustomize Directory Structure
```
.github/
├── kustomize/
│   ├── base/                          # Base manifests
│   │   ├── kustomization.yaml
│   │   ├── cm-common.yaml
│   │   ├── secret-common.yaml
│   │   ├── ingress.yaml
│   │   └── *-service-*.yaml           # Per-service resources
│   └── overlays/                      # Environment-specific settings
│       ├── dev/
│       │   ├── kustomization.yaml
│       │   └── *-service-patch.yaml   # Dev patches
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── *-service-patch.yaml   # Staging patches
│       └── prod/
│           ├── kustomization.yaml
│           └── *-service-patch.yaml   # Prod patches
├── workflows/
│   └── backend-cicd.yaml              # GitHub Actions workflow
├── config/
│   ├── deploy_env_vars_dev            # Dev environment variables
│   ├── deploy_env_vars_staging        # Staging environment variables
│   └── deploy_env_vars_prod           # Prod environment variables
└── scripts/
    └── deploy.sh                      # Manual deployment script
```
---
## Prerequisites
### 1. Azure Resources
- **Azure Container Registry (ACR)**
  - Name: acrdigitalgarage01
  - SKU: Standard or higher
- **Azure Kubernetes Service (AKS)**
  - Cluster name: aks-digitalgarage-01
  - Resource group: rg-digitalgarage-01
  - Kubernetes version: 1.28 or higher
### 2. GitHub Repository Secrets
Register the following secrets in the GitHub repository:
- `ACR_USERNAME`: ACR username
- `ACR_PASSWORD`: ACR password
- `AZURE_CREDENTIALS`: Azure Service Principal JSON
### 3. Local Development Environment (for manual deployment)
- Azure CLI 2.50 or higher
- kubectl 1.28 or higher
- kustomize 5.0 or higher
- JDK 21
---
## GitHub Secrets Setup
### 1. Create an Azure Service Principal
```bash
# Log in to Azure
az login
# Create the Service Principal
az ad sp create-for-rbac \
  --name "github-actions-kt-event-marketing" \
  --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/rg-digitalgarage-01 \
  --sdk-auth
# Register the JSON output as the AZURE_CREDENTIALS secret
```
### 2. Retrieve the ACR Credentials
```bash
# Check the ACR username
az acr credential show --name acrdigitalgarage01 --query username
# Check the ACR password
az acr credential show --name acrdigitalgarage01 --query passwords[0].value
```
### 3. Register the GitHub Secrets
GitHub Repository → Settings → Secrets and variables → Actions → New repository secret
```
Name: ACR_USERNAME
Value: [ACR username]
Name: ACR_PASSWORD
Value: [ACR password]
Name: AZURE_CREDENTIALS
Value: [Service Principal JSON]
```
---
## Deployment Environments
### Dev Environment
- **Branch**: develop
- **Image tag**: dev
- **Namespace**: kt-event-marketing
- **Resources**:
  - Replicas: 1
  - CPU request: 256m, limit: 1024m
  - Memory request: 256Mi, limit: 1024Mi
### Staging Environment
- **Branch**: staging (manual workflow dispatch)
- **Image tag**: staging
- **Namespace**: kt-event-marketing
- **Resources**:
  - Replicas: 2
  - CPU request: 512m, limit: 2048m
  - Memory request: 512Mi, limit: 2048Mi
### Prod Environment
- **Branch**: main
- **Image tag**: prod
- **Namespace**: kt-event-marketing
- **Resources**:
  - Replicas: 3
  - CPU request: 1024m, limit: 4096m
  - Memory request: 1024Mi, limit: 4096Mi
---
## Pipeline Structure
### Job 1: detect-changes
Detects which services changed and decides the deployment environment.
**Output**:
- `services`: list of services to deploy (JSON array)
- `environment`: target environment (dev/staging/prod)
**Logic**:
- Workflow dispatch: the services and environment the user specified
- Push to main: deploy all services to prod
- Push to develop: deploy the changed services to dev
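For illustration, change detection of this kind is often built on a path filter; a minimal sketch, assuming the third-party dorny/paths-filter action rather than the exact implementation in this workflow:
```yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    services: ${{ steps.filter.outputs.changes }}   # JSON array of matched service names
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v3
      id: filter
      with:
        filters: |
          user-service: ['user-service/**', 'common/**']
          event-service: ['event-service/**', 'common/**']
```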
### Job 2: build-and-push
Builds each service in parallel and pushes it to ACR.
**Steps**:
1. Check out the code
2. Set up JDK 21
3. Gradle build (tests excluded)
4. Run the unit tests
5. Build the bootJar
6. Log in to ACR
7. Build and push the Docker image
**Generated image tags**:
- `{environment}`: environment tag (dev/staging/prod)
- `{git-sha}`: Git commit hash
- `latest`: latest image
### Job 3: deploy
Deploys to AKS using Kustomize.
**Steps**:
1. Log in to Azure
2. Get the AKS credentials
3. Install Kustomize
4. Update the image tags
5. Build and apply with Kustomize
6. Wait for the Deployment rollout
7. Verify the deployment
### Job 4: notify
Reports the deployment result.
---
## Usage
### 1. Automatic Deployment (Push)
#### Deploy to Dev
```bash
git checkout develop
git add .
git commit -m "feat: add new feature"
git push origin develop
```
#### Deploy to Prod
```bash
git checkout main
git merge develop
git push origin main
```
### 2. Manual Deployment (Workflow Dispatch)
In the GitHub Actions web UI:
1. Open the Actions tab
2. Select the "Backend CI/CD Pipeline" workflow
3. Click "Run workflow"
4. Choose the environment and service:
   - Environment: dev/staging/prod
   - Service: all or a specific service name
### 3. Manual Deployment from a Local Machine
```bash
# Deploy all services to dev
./.github/scripts/deploy.sh dev
# Deploy only a specific service to prod
./.github/scripts/deploy.sh prod user-service
# Script help
./.github/scripts/deploy.sh
```
### 4. Deploying a Single Service
To deploy only a specific service:
```bash
# Use workflow dispatch in the GitHub Actions UI
# or the local script
./.github/scripts/deploy.sh dev user-service
```
---
## Deployment Verification
### 1. Check Pod Status
```bash
kubectl get pods -n kt-event-marketing
```
Every Pod should be in the `Running` state with a `READY` column of `1/1`.
### 2. Check Services
```bash
kubectl get svc -n kt-event-marketing
```
### 3. Check the Ingress
```bash
kubectl get ingress -n kt-event-marketing
```
### 4. Check Logs
```bash
# Logs of a specific Pod
kubectl logs -n kt-event-marketing <pod-name>
# Stream recent logs
kubectl logs -n kt-event-marketing <pod-name> -f
# Logs of the previous container
kubectl logs -n kt-event-marketing <pod-name> --previous
```
### 5. Application Health Checks
```bash
# Health check through the Ingress
curl http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/users/actuator/health
# Per-service endpoints
curl http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/events/actuator/health
curl http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/content/actuator/health
curl http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/ai-service/actuator/health
curl http://kt-event-marketing-api.20.214.196.128.nip.io/distribution/actuator/health
curl http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/participations/actuator/health
```
---
## Troubleshooting
### Problem 1: ImagePullBackOff
**Symptom**:
```
kubectl get pods -n kt-event-marketing
NAME                READY   STATUS             RESTARTS   AGE
user-service-xxx    0/1     ImagePullBackOff   0          2m
```
**Cause**:
- ACR authentication failure
- The image does not exist in ACR
**Fix**:
```bash
# Check the secret
kubectl get secret kt-event-marketing -n kt-event-marketing
# Recreate the secret
kubectl create secret docker-registry kt-event-marketing \
  --docker-server=acrdigitalgarage01.azurecr.io \
  --docker-username=<ACR_USERNAME> \
  --docker-password=<ACR_PASSWORD> \
  -n kt-event-marketing \
  --dry-run=client -o yaml | kubectl apply -f -
# Confirm the image exists in ACR
az acr repository list --name acrdigitalgarage01 --output table
az acr repository show-tags --name acrdigitalgarage01 --repository kt-event-marketing/user-service
```
### Problem 2: CrashLoopBackOff
**Symptom**:
```
kubectl get pods -n kt-event-marketing
NAME                READY   STATUS             RESTARTS   AGE
user-service-xxx    0/1     CrashLoopBackOff   5          5m
```
**Cause**:
- Application failed to start
- Incorrect environment variables
- Database connection failure
**Fix**:
```bash
# Check the Pod logs
kubectl logs -n kt-event-marketing user-service-xxx
# Check the previous container's logs
kubectl logs -n kt-event-marketing user-service-xxx --previous
# Check the ConfigMaps
kubectl get cm -n kt-event-marketing cm-common -o yaml
kubectl get cm -n kt-event-marketing cm-user-service -o yaml
# Check the Secret (values are base64-encoded)
kubectl get secret -n kt-event-marketing secret-common -o yaml
# Describe the Pod for details
kubectl describe pod -n kt-event-marketing user-service-xxx
```
### Problem 3: Readiness Probe Failed
**Symptom**:
```
kubectl get pods -n kt-event-marketing
NAME                   READY   STATUS    RESTARTS   AGE
content-service-xxx    0/1     Running   0          3m
```
The Pod is Running, but READY stays 0/1.
**Cause**:
- Wrong actuator endpoint path
- Application context-path misconfiguration
**Fix**:
```bash
# Check the Pod events
kubectl describe pod -n kt-event-marketing content-service-xxx
# Test the health check manually
kubectl exec -n kt-event-marketing content-service-xxx -- \
  curl -f http://localhost:8084/api/v1/content/actuator/health
# Check and fix the probe settings in the Deployment
kubectl edit deployment content-service -n kt-event-marketing
```
### Problem 4: Database Connection Failed
**Symptom**:
Database connection errors appear in the logs.
**Fix**:
```bash
# Check the PostgreSQL Pod
kubectl get pods -n kt-event-marketing | grep postgres
# Test the PostgreSQL connection
kubectl exec -it distribution-postgresql-0 -n kt-event-marketing -- \
  psql -U eventuser -d postgres -c "\l"
# Create the database
kubectl exec distribution-postgresql-0 -n kt-event-marketing -- \
  bash -c "PGPASSWORD=Hi5Jessica! psql -U eventuser -d postgres -c 'CREATE DATABASE analytics_db;'"
# Check the DB settings in the ConfigMap and Secret
kubectl get cm cm-analytics-service -n kt-event-marketing -o yaml
kubectl get secret secret-analytics-service -n kt-event-marketing -o yaml
```
### Problem 5: Workflow Failure
**Symptom**:
The GitHub Actions workflow fails.
**Fix**:
1. **Build stage failure**:
   ```bash
   # Test the build locally
   ./gradlew clean build
   # Build a single service
   ./gradlew user-service:build
   ```
2. **Docker push failure**:
   - Check the ACR_USERNAME and ACR_PASSWORD secrets
   - Check the ACR login permissions
3. **Deploy stage failure**:
   - Check the AZURE_CREDENTIALS secret
   - Check the Service Principal permissions
   - Check access permissions to the AKS cluster
### Problem 6: Kustomize Build Failure
**Symptom**:
```
Error: unable to find one or more resources
```
**Fix**:
```bash
# Test the Kustomize build locally
cd .github/kustomize/overlays/dev
kustomize build .
# Confirm the resource files exist
ls -la ../../base/
# Check the resources paths in kustomization.yaml
cat kustomization.yaml
```
---
## Monitoring and Logging
### 1. Real-Time Log Monitoring
```bash
# Stream logs from every service
kubectl logs -n kt-event-marketing -l app.kubernetes.io/part-of=kt-event-marketing -f
# Logs for a specific service
kubectl logs -n kt-event-marketing -l app=user-service -f
```
### 2. Resource Usage Monitoring
```bash
# Pod resource usage
kubectl top pods -n kt-event-marketing
# Node resource usage
kubectl top nodes
```
### 3. Checking Events
```bash
# Namespace events
kubectl get events -n kt-event-marketing --sort-by='.lastTimestamp'
# Events for a specific Pod
kubectl describe pod -n kt-event-marketing user-service-xxx
```
---
## Rollback
### 1. Deployment Rollback
```bash
# Roll back to the previous version
kubectl rollout undo deployment/user-service -n kt-event-marketing
# Roll back to a specific revision
kubectl rollout history deployment/user-service -n kt-event-marketing
kubectl rollout undo deployment/user-service --to-revision=2 -n kt-event-marketing
```
### 2. Rollback by Image Tag
```bash
# Switch to a specific image version
kubectl set image deployment/user-service \
  user-service=acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:{previous-sha} \
  -n kt-event-marketing
# Check the rollout status
kubectl rollout status deployment/user-service -n kt-event-marketing
```
---
## References
- [GitHub Actions documentation](https://docs.github.com/en/actions)
- [Kustomize documentation](https://kustomize.io/)
- [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/)
- [Kubernetes documentation](https://kubernetes.io/docs/)
---
## Change History
| Date | Version | Changes | Author |
|------|------|-----------|--------|
| 2025-10-29 | 1.0.0 | Initial CI/CD pipeline build | DevOps Team |

View File

@ -0,0 +1,288 @@
# GitHub Actions CI/CD Pipeline Build Complete
## Date
2025-10-29
## Work Performed
### 1. Kustomize Directory Structure Created ✅
```
.github/kustomize/
├── base/                        # Base manifests (35 files)
│   ├── kustomization.yaml       # Base resource definitions
│   ├── cm-common.yaml
│   ├── secret-common.yaml
│   ├── secret-imagepull.yaml
│   ├── ingress.yaml
│   └── {service}-*.yaml         # 7 services × 4 resources
└── overlays/                    # Environment-specific settings
    ├── dev/                     # Dev environment (9 files)
    │   ├── kustomization.yaml
    │   └── *-service-patch.yaml # 7 services
    ├── staging/                 # Staging environment (9 files)
    │   ├── kustomization.yaml
    │   └── *-service-patch.yaml # 7 services
    └── prod/                    # Prod environment (9 files)
        ├── kustomization.yaml
        └── *-service-patch.yaml # 7 services
```
**Total files**: 62
### 2. GitHub Actions Workflow Created ✅
**File**: `.github/workflows/backend-cicd.yaml`
**Key features**:
- Automatic detection of changed services
- Parallel build and test
- Docker image push to ACR
- AKS deployment via Kustomize
- Deployment verification and notification
**Triggers**:
- Push to develop → automatic deployment to dev
- Push to main → automatic deployment to prod
- Manual workflow dispatch → deploy a chosen environment/service
**Jobs**:
1. `detect-changes`: detect changed services and the target environment
2. `build-and-push`: parallel per-service build and push
3. `deploy`: Kustomize-based AKS deployment
4. `notify`: deployment result notification
### 3. Deployment Scripts Created ✅
**Directory**: `.github/scripts/`
1. **deploy.sh**
   - Main script for manual local deployment
   - Applies the environment-specific deployment settings automatically
   - Usage: `./deploy.sh <env> [service]`
2. **generate-patches.sh**
   - Auto-generates the staging and prod patch files
   - Sets resource allocations to match each environment
3. **copy-manifests-to-base.py**
   - Copies the existing K8s manifests into the base directory
   - Removes namespace declarations automatically
### 4. Environment Config Files Created ✅
**Directory**: `.github/config/`
1. **deploy_env_vars_dev**
   - Dev environment variables
   - Replicas: 1, CPU: 256m-1024m, Memory: 256Mi-1024Mi
2. **deploy_env_vars_staging**
   - Staging environment variables
   - Replicas: 2, CPU: 512m-2048m, Memory: 512Mi-2048Mi
3. **deploy_env_vars_prod**
   - Prod environment variables
   - Replicas: 3, CPU: 1024m-4096m, Memory: 1024Mi-4096Mi
### 5. Documentation Complete ✅
1. **deployment/cicd/CICD-GUIDE.md** (15KB)
   - Full CI/CD pipeline guide
   - Prerequisites and setup instructions
   - Detailed troubleshooting guide
   - Monitoring and rollback procedures
2. **.github/README.md** (6KB)
   - CI/CD infrastructure overview
   - Deployment process guide
   - Environment settings summary
3. **deployment/cicd/SETUP-SUMMARY.md** (this file)
   - Build completion summary
## System Information
| Item | Value |
|------|------|
| System name | kt-event-marketing |
| JDK version | 21 |
| Build tool | Gradle |
| ACR | acrdigitalgarage01.azurecr.io |
| AKS cluster | aks-digitalgarage-01 |
| Resource group | rg-digitalgarage-01 |
| Namespace | kt-event-marketing |
| Services | 7 |
## Service List
1. **user-service** (8081)
2. **event-service** (8082)
3. **ai-service** (8083)
4. **content-service** (8084)
5. **distribution-service** (8085)
6. **participation-service** (8086)
7. **analytics-service** (8087)
## Environment Settings
| Environment | Branch | Image Tag | Replicas | CPU Limit | Memory Limit |
|------|--------|-------------|----------|-----------|--------------|
| Dev | develop | dev | 1 | 1024m | 1024Mi |
| Staging | manual | staging | 2 | 2048m | 2048Mi |
| Prod | main | prod | 3 | 4096m | 4096Mi |
## Next Steps (Required)
### 1. Configure GitHub Secrets (mandatory)
Register the following secrets in the GitHub repository:
```bash
# Check the ACR credentials
az acr credential show --name acrdigitalgarage01
# Create the Service Principal
az ad sp create-for-rbac \
  --name "github-actions-kt-event-marketing" \
  --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/rg-digitalgarage-01 \
  --sdk-auth
```
**Required secrets**:
1. `ACR_USERNAME` - ACR username
2. `ACR_PASSWORD` - ACR password
3. `AZURE_CREDENTIALS` - Service Principal JSON
**Registration path**:
GitHub Repository → Settings → Secrets and variables → Actions → New repository secret
### 2. Initial Deployment Test
```bash
# Test the Kustomize build locally
cd .github/kustomize/overlays/dev
kustomize build .
# Log in to Azure
az login
# Get the AKS credentials
az aks get-credentials \
  --resource-group rg-digitalgarage-01 \
  --name aks-digitalgarage-01
# Test a dev deployment (run from the repository root)
./.github/scripts/deploy.sh dev
```
### 3. Test the GitHub Actions Workflow
```bash
# Commit to the develop branch to trigger an automatic deployment
git checkout develop
git add .
git commit -m "ci: Add GitHub Actions CI/CD pipeline"
git push origin develop
```
Alternatively, run a manual workflow dispatch from the GitHub Actions UI.
## Key Characteristics
### 1. Kustomize-Based Environment Management
- Base + overlays pattern minimizes duplication
- Environment-specific resource allocations applied automatically
- Image tags separated per environment (dev/staging/prod)
### 2. Automatic Change Detection
- Changed services detected automatically via git diff
- Only changed services are built and deployed, saving time
- Manual triggers can target all services or a specific one
### 3. Parallel Processing
- The 7 services build in parallel to shorten CI time
- Efficient CI using a matrix strategy
### 4. Safe Deployment
- Startup, readiness, and liveness probes configured
- Rollout status checked automatically
- Automatic notification on deployment failure
### 5. Multi-Level Image Tagging
- Environment tags: dev/staging/prod
- Git SHA tags: traceability
- latest tag: tracks the newest version
## File Statistics
| Category | Files | Description |
|----------|---------|------|
| Kustomize base | 35 | Base manifests |
| Kustomize dev overlay | 9 | Dev environment settings |
| Kustomize staging overlay | 9 | Staging environment settings |
| Kustomize prod overlay | 9 | Prod environment settings |
| Workflows | 1 | GitHub Actions workflow |
| Scripts | 3 | Deployment and utility scripts |
| Config | 3 | Environment config files |
| Documentation | 3 | Guide documents |
| **Total** | **72** | **All files** |
## Tech Stack
- **CI/CD**: GitHub Actions
- **Container Registry**: Azure Container Registry
- **Orchestration**: Azure Kubernetes Service (AKS)
- **Manifest Management**: Kustomize
- **Build Tool**: Gradle
- **Runtime**: OpenJDK 21
- **Containerization**: Docker (multi-stage builds)
## Notes
### Current AKS Deployment Status
All 7 services are currently deployed to AKS and in the Running state:
- user-service: 1/1 Running
- event-service: 1/1 Running
- ai-service: 1/1 Running
- content-service: 1/1 Running
- distribution-service: 1/1 Running
- participation-service: 1/1 Running
- analytics-service: 1/1 Running
### Compatibility with the Existing Deployment Method
- The existing K8s manifests (`deployment/k8s/`) are kept unchanged
- Kustomize is configured in a separate path (`.github/kustomize/`)
- Both approaches remain usable, but Kustomize is recommended for CI/CD
### Monitoring and Logging
- Uses the default Kubernetes monitoring
- Azure Monitor integration is possible (separate setup required)
- Application Insights integration is possible (separate setup required)
## Support
- CI/CD issues: see [deployment/cicd/CICD-GUIDE.md](./CICD-GUIDE.md)
- Infrastructure layout: see [.github/README.md](../../.github/README.md)
- Troubleshooting: see the troubleshooting section of CICD-GUIDE.md
## Completion Checklist
- [x] Kustomize base directory created and manifests copied
- [x] Environment overlay directories created (dev/staging/prod)
- [x] Environment patch files created
- [x] GitHub Actions workflow written
- [x] Deployment scripts written
- [x] Environment config files written
- [x] CI/CD guide documented
- [x] README documented
- [ ] GitHub Secrets configured (user action required)
- [ ] Initial deployment test (user action required)
- [ ] Workflow verification (user action required)
---
**Date**: 2025-10-29
**Author**: Claude (DevOps Assistant)
**Version**: 1.0.0

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health
+            path: /api/v1/ai-service/actuator/health
             port: 8083
           initialDelaySeconds: 30
           periodSeconds: 10
           failureThreshold: 30
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /api/v1/ai-service/actuator/health/readiness
             port: 8083
           initialDelaySeconds: 10
           periodSeconds: 5
           failureThreshold: 3
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/ai-service/actuator/health/liveness
             port: 8083
           initialDelaySeconds: 30
           periodSeconds: 10

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/analytics/actuator/health/liveness
             port: 8086
           initialDelaySeconds: 60
           periodSeconds: 10
           failureThreshold: 30
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/analytics/actuator/health/liveness
             port: 8086
           initialDelaySeconds: 0
           periodSeconds: 10
           failureThreshold: 3
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /api/v1/analytics/actuator/health/readiness
             port: 8086
           initialDelaySeconds: 0
           periodSeconds: 10

View File

@ -89,18 +89,9 @@ spec:
                 port:
                   number: 80
-          # Analytics Service - Event Analytics
-          - path: /api/v1/events/([0-9]+)/analytics
-            pathType: ImplementationSpecific
-            backend:
-              service:
-                name: analytics-service
-                port:
-                  number: 80
-          # Analytics Service - User Analytics
-          - path: /api/v1/users/([0-9]+)/analytics
-            pathType: ImplementationSpecific
+          # Analytics Service
+          - path: /api/v1/analytics
+            pathType: Prefix
             backend:
               service:
                 name: analytics-service

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health
+            path: /distribution/actuator/health
             port: 8085
           initialDelaySeconds: 30
           periodSeconds: 10
           failureThreshold: 30
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /distribution/actuator/health/readiness
             port: 8085
           initialDelaySeconds: 10
           periodSeconds: 5
           failureThreshold: 3
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /distribution/actuator/health/liveness
             port: 8085
           initialDelaySeconds: 30
           periodSeconds: 10

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health
+            path: /api/v1/events/actuator/health
             port: 8080
           initialDelaySeconds: 30
           periodSeconds: 10
           failureThreshold: 30
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /api/v1/events/actuator/health/readiness
             port: 8080
           initialDelaySeconds: 10
           periodSeconds: 5
           failureThreshold: 3
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/events/actuator/health/liveness
             port: 8080
           initialDelaySeconds: 30
           periodSeconds: 10

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/participations/actuator/health/liveness
             port: 8084
           initialDelaySeconds: 60
           periodSeconds: 10
           failureThreshold: 30
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/participations/actuator/health/liveness
             port: 8084
           initialDelaySeconds: 0
           periodSeconds: 10
           failureThreshold: 3
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /api/v1/participations/actuator/health/readiness
             port: 8084
           initialDelaySeconds: 0
           periodSeconds: 10

View File

@ -42,21 +42,21 @@ spec:
             memory: "1024Mi"
         startupProbe:
           httpGet:
-            path: /actuator/health
+            path: /api/v1/users/actuator/health
             port: 8081
           initialDelaySeconds: 30
           periodSeconds: 10
           failureThreshold: 30
         readinessProbe:
           httpGet:
-            path: /actuator/health/readiness
+            path: /api/v1/users/actuator/health/readiness
             port: 8081
           initialDelaySeconds: 10
           periodSeconds: 5
           failureThreshold: 3
         livenessProbe:
           httpGet:
-            path: /actuator/health/liveness
+            path: /api/v1/users/actuator/health/liveness
             port: 8081
           initialDelaySeconds: 30
           periodSeconds: 10

View File

@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8085/distribution/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -1,6 +1,3 @@
-server:
-  port: 8085
-
 spring:
   application:
     name: distribution-service
@ -67,6 +64,12 @@ kafka:
   topics:
     distribution-completed: distribution-completed
 
+# Server Configuration
+server:
+  port: ${SERVER_PORT:8085}
+  servlet:
+    context-path: /distribution
+
 # Resilience4j Configuration
 resilience4j:
   circuitbreaker:

24
event-service/Dockerfile Normal file
View File

@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/api/v1/events/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -71,7 +71,7 @@ spring:
 server:
   port: ${SERVER_PORT:8080}
   servlet:
-    context-path: /
+    context-path: /api/v1/events
   shutdown: graceful
 
 # Actuator Configuration

View File

@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8084/api/v1/participations/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -57,6 +57,8 @@ jwt:
 # Server configuration
 server:
   port: ${SERVER_PORT:8084}
+  servlet:
+    context-path: /api/v1/participations
 
 # Logging configuration
 logging:

24
user-service/Dockerfile Normal file
View File

@ -0,0 +1,24 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8081/api/v1/users/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -121,3 +121,5 @@ logging:
 # Server Configuration
 server:
   port: ${SERVER_PORT:8081}
+  servlet:
+    context-path: /api/v1/users