Compare commits

1 commit

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| hyeda2020 | be4fcc0dc3 | Merge pull request #15 from ktds-dg0501/develop (Develop) | 2025-10-28 13:16:23 +09:00 |

469 changed files with 3533 additions and 42580 deletions

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-actions-cicd-guide-back"
 description: "Write the backend GitHub Actions CI/CD pipeline guide"
 ---
 @cicd
 Please write a CI/CD guide using GitHub Actions, following the '백엔드GitHubActions파이프라인작성가이드'.
 If the prompt has no '[실행정보]' (execution info) section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-actions-cicd-guide-front"
 description: "Write the frontend GitHub Actions CI/CD pipeline guide"
 ---
 @cicd
 Please write a CI/CD guide using GitHub Actions, following the '프론트엔드GitHubActions파이프라인작성가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,6 +1,5 @@
 ---
 command: "/deploy-build-image-back"
 description: "Build the backend container images"
 ---
 @cicd

@@ -1,6 +1,5 @@
 ---
 command: "/deploy-build-image-front"
 description: "Build the frontend container image"
 ---
 @cicd

@@ -1,64 +1,81 @@
 ---
 command: "/deploy-help"
 description: "Deployment workflow and command reference"
 ---
 # Deployment workflow
-## Build container images
+## Step 1: Build container images
 ### Backend
+```
 /deploy-build-image-back
-- Builds container images for the backend services
+```
+- Builds the container images by following the '백엔드컨테이너이미지작성가이드'
 ### Frontend
+```
 /deploy-build-image-front
-- Builds the container image for the frontend service
+```
+- Builds the container image by following the '프론트엔드컨테이너이미지작성가이드'
-## Write the container run guide
+## Step 2: Write the container run guide
 ### Backend
+```
 /deploy-run-container-guide-back
-- Writes the backend container run guide
-- The '[실행정보]' section must provide the ACR name and VM access info
+```
+- Writes the container run procedure by following the '백엔드컨테이너실행방법가이드'
+- Requires execution info (ACR name, VM info)
 ### Frontend
+```
 /deploy-run-container-guide-front
-- Writes the frontend container run guide
-- The '[실행정보]' section must provide the system name, ACR name, and VM access info
+```
+- Writes the container run procedure by following the '프론트엔드컨테이너실행방법가이드'
+- Requires execution info (system name, ACR name, VM info)
-## Write the Kubernetes deployment guide
+## Step 3: Write the Kubernetes deployment guide
 ### Backend
+```
 /deploy-k8s-guide-back
-- Writes the backend service Kubernetes deployment guide
-- The '[실행정보]' section must provide the ACR name, k8s name, namespace, and resource info
+```
+- Writes the Kubernetes deployment procedure by following the '백엔드배포가이드'
+- Requires execution info (ACR name, k8s name, namespace, resource settings)
 ### Frontend
+```
 /deploy-k8s-guide-front
-- Writes the frontend service Kubernetes deployment guide
-- The '[실행정보]' section must provide the system name, ACR name, k8s name, namespace, and Gateway Host info
+```
+- Writes the Kubernetes deployment procedure by following the '프론트엔드배포가이드'
+- Requires execution info (system name, ACR name, k8s name, namespace, Gateway Host, resource settings)
-## Write the CI/CD pipeline
+## Step 4: Set up the CI/CD pipeline
-### Jenkins CI/CD
+### When using Jenkins
 #### Backend
+```
 /deploy-jenkins-cicd-guide-back
-- Writes the backend CI/CD pipeline guide using Jenkins
-- The '[실행정보]' section must provide ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE
+```
+- Sets up the Jenkins CI/CD pipeline by following the '백엔드Jenkins파이프라인작성가이드'
 #### Frontend
+```
 /deploy-jenkins-cicd-guide-front
-- Writes the frontend CI/CD pipeline guide using Jenkins
-- The '[실행정보]' section must provide SYSTEM_NAME, ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE
+```
+- Sets up the Jenkins CI/CD pipeline by following the '프론트엔드Jenkins파이프라인작성가이드'
-### GitHub Actions CI/CD
+### When using GitHub Actions
 #### Backend
+```
 /deploy-actions-cicd-guide-back
-- Writes the backend CI/CD pipeline guide using GitHub Actions
-- The '[실행정보]' section must provide ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE
+```
+- Sets up the GitHub Actions CI/CD pipeline by following the '백엔드GitHubActions파이프라인작성가이드'
 #### Frontend
+```
 /deploy-actions-cicd-guide-front
-- Writes the frontend CI/CD pipeline guide using GitHub Actions
-- The '[실행정보]' section must provide SYSTEM_NAME, ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE
+```
+- Sets up the GitHub Actions CI/CD pipeline by following the '프론트엔드GitHubActions파이프라인작성가이드'
----
+## Notes
+- Include the required execution info in the prompt before running each command
-**Note**: When running each command, provide the required information in the '[실행정보]' section.
+- If the execution info is missing, a guidance message is shown and the task is aborted
+- Choose either Jenkins or GitHub Actions as the CI/CD tool

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-jenkins-cicd-guide-back"
 description: "Write the backend Jenkins CI/CD pipeline guide"
 ---
 @cicd
 Please write a CI/CD guide using Jenkins, following the '백엔드Jenkins파이프라인작성가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-jenkins-cicd-guide-front"
 description: "Write the frontend Jenkins CI/CD pipeline guide"
 ---
 @cicd
 Please write a CI/CD guide using Jenkins, following the '프론트엔드Jenkins파이프라인작성가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-k8s-guide-back"
 description: "Write the backend Kubernetes deployment guide"
 ---
 @cicd
 Please write the backend service deployment procedure, following the '백엔드배포가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-k8s-guide-front"
 description: "Write the frontend Kubernetes deployment guide"
 ---
 @cicd
 Please write the frontend service deployment procedure, following the '프론트엔드배포가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-run-container-guide-back"
 description: "Write the backend container run guide"
 ---
 @cicd
 Please write a container run guide, following the '백엔드컨테이너실행방법가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

@@ -1,13 +1,10 @@
 ---
 command: "/deploy-run-container-guide-front"
 description: "Write the frontend container run guide"
 ---
 @cicd
 Please write a container run guide, following the '프론트엔드컨테이너실행방법가이드'.
 If the prompt has no '[실행정보]' section, stop and show the guidance message.
 {안내메시지}
 Under the '[실행정보]' section, provide the required information as in the example below.
 [실행정보]

.github/README.md (vendored, 186 lines deleted)

@@ -1,186 +0,0 @@
# KT Event Marketing - CI/CD Infrastructure
This directory contains the CI/CD infrastructure for the KT Event Marketing backend services.
## Directory structure
```
.github/
├── README.md                      # this file
├── workflows/
│   └── backend-cicd.yaml          # GitHub Actions workflow
├── kustomize/                     # Kubernetes manifest management
│   ├── base/                      # base resource definitions
│   │   ├── kustomization.yaml
│   │   ├── cm-common.yaml
│   │   ├── secret-common.yaml
│   │   ├── secret-imagepull.yaml
│   │   ├── ingress.yaml
│   │   └── {service}-*.yaml       # per-service resources
│   └── overlays/                  # per-environment settings
│       ├── dev/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml       # 1 replica, 256Mi-1024Mi
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml       # 2 replicas, 512Mi-2048Mi
│       └── prod/
│           ├── kustomization.yaml
│           └── *-patch.yaml       # 3 replicas, 1024Mi-4096Mi
├── config/
│   ├── deploy_env_vars_dev        # dev environment variables
│   ├── deploy_env_vars_staging    # staging environment variables
│   └── deploy_env_vars_prod       # prod environment variables
└── scripts/
    ├── deploy.sh                  # manual deployment script
    ├── generate-patches.sh        # patch file generation script
    └── copy-manifests-to-base.py  # manifest copy script
```
## Key files
### workflows/backend-cicd.yaml
Defines the GitHub Actions workflow.
**Triggers**:
- push to the develop branch → deploy to dev
- push to the main branch → deploy to prod
- manual workflow dispatch → choose the environment and services
**Jobs**:
1. `detect-changes`: detect which services changed
2. `build-and-push`: build the services and push to ACR
3. `deploy`: deploy to AKS
4. `notify`: report the deployment result
### kustomize/base/kustomization.yaml
Defines the base resources shared by all environments.
**Included resources**:
- common ConfigMaps and Secrets
- Ingress
- Deployment, Service, ConfigMap, and Secret for each of the 7 services
### kustomize/overlays/{env}/kustomization.yaml
Overrides the per-environment settings.
**Key differences**:
- image tag (dev/staging/prod)
- replica count (1/2/3)
- resource allocation (small/medium/large)
### scripts/deploy.sh
Script for manual deployment from a local machine.
**Usage**:
```bash
# Deploy all services to the dev environment
./scripts/deploy.sh dev
# Deploy only a specific service to prod
./scripts/deploy.sh prod user-service
```
## Deployment process
### Automatic deployment (GitHub Actions)
1. **Dev environment**:
```bash
git checkout develop
git push origin develop
```
2. **Prod environment**:
```bash
git checkout main
git merge develop
git push origin main
```
3. **Manual deployment**:
- GitHub Actions UI → Run workflow
- choose an environment (dev/staging/prod)
- choose a service (all or a specific service)
### Manual deployment (local)
```bash
# Prerequisites: Azure CLI, kubectl, kustomize installed
# Azure login required
# Deploy all services to dev
./.github/scripts/deploy.sh dev
# Deploy only user-service to prod
./.github/scripts/deploy.sh prod user-service
```
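The deploy.sh script itself is deleted in this diff; a minimal sketch of its argument handling (environment plus optional service), with the actual kustomize/kubectl calls elided, might look like:

```shell
# Hypothetical sketch of deploy.sh's dispatch logic (the real script is not shown in this diff)
deploy() {
  env="$1"
  service="${2:-all}"   # default: deploy every service
  case "$env" in
    dev|staging|prod) ;;
    *) echo "unknown environment: $env" >&2; return 1 ;;
  esac
  # The real script would run something like:
  #   kustomize build .github/kustomize/overlays/"$env" | kubectl apply -f -
  echo "deploying $service to $env"
}

deploy dev                 # all services to dev
deploy prod user-service   # a single service to prod
```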
## Environment settings
| Environment | Branch | Image tag | Replicas | CPU request | Memory request |
|------|--------|-------------|----------|-------------|----------------|
| Dev | develop | dev | 1 | 256m | 256Mi |
| Staging | manual | staging | 2 | 512m | 512Mi |
| Prod | main | prod | 3 | 1024m | 1024Mi |
## Services
1. **user-service** (8081) - user management
2. **event-service** (8082) - event management
3. **ai-service** (8083) - AI-based content generation
4. **content-service** (8084) - content management
5. **distribution-service** (8085) - prize distribution
6. **participation-service** (8086) - event participation
7. **analytics-service** (8087) - analytics and statistics
## Monitoring
### Check pod status
```bash
kubectl get pods -n kt-event-marketing
```
### Check logs
```bash
# Live logs
kubectl logs -n kt-event-marketing -l app=user-service -f
# Logs of the previous container
kubectl logs -n kt-event-marketing <pod-name> --previous
```
### Resource usage
```bash
# Pod resources
kubectl top pods -n kt-event-marketing
# Node resources
kubectl top nodes
```
## Troubleshooting
See [deployment/cicd/CICD-GUIDE.md](../../deployment/cicd/CICD-GUIDE.md) for the detailed troubleshooting guide.
**Common fixes**:
- ImagePullBackOff → check the ACR secret
- CrashLoopBackOff → check logs and validate environment variables
- Readiness probe failed → check the context path and actuator endpoints
## Rollback
```bash
# Roll back to the previous version
kubectl rollout undo deployment/user-service -n kt-event-marketing
# Roll back to a specific revision
kubectl rollout undo deployment/user-service --to-revision=2 -n kt-event-marketing
```
## References
- [CI/CD guide (Korean)](../../deployment/cicd/CICD-GUIDE.md)
- [GitHub Actions documentation](https://docs.github.com/en/actions)
- [Kustomize documentation](https://kustomize.io/)
- [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/)

.github/config/deploy_env_vars_dev

@@ -1,11 +0,0 @@
# Development Environment Variables
ENVIRONMENT=dev
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=1
CPU_REQUEST=256m
MEMORY_REQUEST=256Mi
CPU_LIMIT=1024m
MEMORY_LIMIT=1024Mi
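These flat KEY=VALUE files can be sourced directly by a POSIX shell, which is presumably how the pipeline scripts consume them. A sketch using a temporary copy (the path is illustrative):

```shell
# Recreate the dev file's format in a temp location (illustrative path and subset of keys)
cat > /tmp/deploy_env_vars_dev <<'EOF'
# Development Environment Variables
ENVIRONMENT=dev
REPLICAS=1
CPU_REQUEST=256m
EOF

set -a                        # auto-export everything the file defines
. /tmp/deploy_env_vars_dev
set +a

echo "$ENVIRONMENT: $REPLICAS replica(s), cpu $CPU_REQUEST"
```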

.github/config/deploy_env_vars_prod

@@ -1,11 +0,0 @@
# Production Environment Variables
ENVIRONMENT=prod
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=3
CPU_REQUEST=1024m
MEMORY_REQUEST=1024Mi
CPU_LIMIT=4096m
MEMORY_LIMIT=4096Mi

.github/config/deploy_env_vars_staging

@@ -1,11 +0,0 @@
# Staging Environment Variables
ENVIRONMENT=staging
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=2
CPU_REQUEST=512m
MEMORY_REQUEST=512Mi
CPU_LIMIT=2048m
MEMORY_LIMIT=2048Mi

@@ -1,55 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-ai-service
data:
  # Server Configuration
  SERVER_PORT: "8083"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "3"
  REDIS_TIMEOUT: "3000"
  REDIS_POOL_MIN: "2"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "ai-service-consumers"
  # Kafka Topics Configuration
  KAFKA_TOPICS_AI_JOB: "ai-event-generation-job"
  KAFKA_TOPICS_AI_JOB_DLQ: "ai-event-generation-job-dlq"
  # AI Provider Configuration
  AI_PROVIDER: "CLAUDE"
  AI_CLAUDE_API_URL: "https://api.anthropic.com/v1/messages"
  AI_CLAUDE_ANTHROPIC_VERSION: "2023-06-01"
  AI_CLAUDE_MODEL: "claude-sonnet-4-5-20250929"
  AI_CLAUDE_MAX_TOKENS: "4096"
  AI_CLAUDE_TEMPERATURE: "0.7"
  AI_CLAUDE_TIMEOUT: "300000"
  # Circuit Breaker Configuration
  RESILIENCE4J_CIRCUITBREAKER_FAILURE_RATE_THRESHOLD: "50"
  RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_RATE_THRESHOLD: "50"
  RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_DURATION_THRESHOLD: "60s"
  RESILIENCE4J_CIRCUITBREAKER_PERMITTED_CALLS_HALF_OPEN: "3"
  RESILIENCE4J_CIRCUITBREAKER_SLIDING_WINDOW_SIZE: "10"
  RESILIENCE4J_CIRCUITBREAKER_MINIMUM_CALLS: "5"
  RESILIENCE4J_CIRCUITBREAKER_WAIT_DURATION_OPEN: "60s"
  RESILIENCE4J_TIMELIMITER_TIMEOUT_DURATION: "300s"
  # Redis Cache TTL Configuration (seconds)
  CACHE_TTL_RECOMMENDATION: "86400"
  CACHE_TTL_JOB_STATUS: "86400"
  CACHE_TTL_TREND: "3600"
  CACHE_TTL_FALLBACK: "604800"
  # Logging Configuration
  LOG_LEVEL_ROOT: "INFO"
  LOG_LEVEL_AI: "DEBUG"
  LOG_LEVEL_KAFKA: "INFO"
  LOG_LEVEL_REDIS: "INFO"
  LOG_LEVEL_RESILIENCE4J: "DEBUG"
  LOG_FILE_NAME: "logs/ai-service.log"
  LOG_FILE_MAX_SIZE: "10MB"
  LOG_FILE_MAX_HISTORY: "7"
  LOG_FILE_TOTAL_CAP: "100MB"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-service
  labels:
    app: ai-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-service
  template:
    metadata:
      labels:
        app: ai-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: ai-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8083
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-ai-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-ai-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health
              port: 8083
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8083
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8083
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-ai-service
type: Opaque
stringData:
  # Claude API Key
  AI_CLAUDE_API_KEY: "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"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: ai-service
  labels:
    app: ai-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8083
      protocol: TCP
      name: http
  selector:
    app: ai-service

@@ -1,37 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-analytics-service
data:
  # Server Configuration
  SERVER_PORT: "8086"
  # Database Configuration
  DB_HOST: "analytic-postgresql"
  DB_PORT: "5432"
  DB_NAME: "analytics_db"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "5"
  # Kafka Configuration (service-specific)
  KAFKA_ENABLED: "true"
  KAFKA_CONSUMER_GROUP_ID: "analytics-service"
  # Sample Data Configuration (MVP only)
  SAMPLE_DATA_ENABLED: "true"
  # Batch Scheduler Configuration
  BATCH_REFRESH_INTERVAL: "300000"  # 5 minutes (milliseconds)
  BATCH_INITIAL_DELAY: "30000"      # 30 seconds (milliseconds)
  BATCH_ENABLED: "true"
  # Logging Configuration
  LOG_LEVEL_APP: "INFO"
  LOG_LEVEL_WEB: "INFO"
  LOG_LEVEL_SQL: "WARN"
  LOG_LEVEL_SQL_TYPE: "WARN"
  SHOW_SQL: "false"
  DDL_AUTO: "update"
  LOG_FILE: "logs/analytics-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-service
  labels:
    app: analytics-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: analytics-service
  template:
    metadata:
      labels:
        app: analytics-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: analytics-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8086
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-analytics-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-analytics-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8086
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8086
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8086
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-analytics-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: analytics-service
  labels:
    app: analytics-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8086
      protocol: TCP
      name: http
  selector:
    app: analytics-service

@@ -1,46 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-common
data:
  # Redis Configuration
  REDIS_ENABLED: "true"
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  REDIS_TIMEOUT: "2000ms"
  REDIS_POOL_MAX: "8"
  REDIS_POOL_IDLE: "8"
  REDIS_POOL_MIN: "0"
  REDIS_POOL_WAIT: "-1ms"
  # Kafka Configuration
  KAFKA_BOOTSTRAP_SERVERS: "20.249.182.13:9095,4.217.131.59:9095"
  EXCLUDE_KAFKA: ""
  EXCLUDE_REDIS: ""
  # CORS Configuration
  CORS_ALLOWED_ORIGINS: "http://localhost:8081,http://localhost:8082,http://localhost:8083,http://localhost:8084,http://kt-event-marketing.20.214.196.128.nip.io"
  CORS_ALLOWED_METHODS: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  CORS_ALLOWED_HEADERS: "*"
  CORS_ALLOW_CREDENTIALS: "true"
  CORS_MAX_AGE: "3600"
  # JWT Configuration
  JWT_ACCESS_TOKEN_VALIDITY: "604800000"
  JWT_REFRESH_TOKEN_VALIDITY: "86400000"
  # JPA Configuration
  DDL_AUTO: "update"
  SHOW_SQL: "false"
  JPA_DIALECT: "org.hibernate.dialect.PostgreSQLDialect"
  H2_CONSOLE_ENABLED: "false"
  # Logging Configuration
  LOG_LEVEL_APP: "INFO"
  LOG_LEVEL_WEB: "INFO"
  LOG_LEVEL_SQL: "WARN"
  LOG_LEVEL_SQL_TYPE: "WARN"
  LOG_LEVEL_ROOT: "INFO"
  LOG_FILE_MAX_SIZE: "10MB"
  LOG_FILE_MAX_HISTORY: "7"
  LOG_FILE_TOTAL_CAP: "100MB"

@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-content-service
data:
  # Server Configuration
  SERVER_PORT: "8084"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "1"
  # Replicate API Configuration (Stable Diffusion)
  REPLICATE_API_URL: "https://api.replicate.com"
  REPLICATE_MODEL_VERSION: "stability-ai/sdxl:39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b"
  # HuggingFace API Configuration
  HUGGINGFACE_API_URL: "https://api-inference.huggingface.co"
  HUGGINGFACE_MODEL: "runwayml/stable-diffusion-v1-5"
  # Azure Blob Storage Configuration
  AZURE_CONTAINER_NAME: "content-images"
  # Logging Configuration
  LOG_FILE_PATH: "logs/content-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: content-service
  labels:
    app: content-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: content-service
  template:
    metadata:
      labels:
        app: content-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: content-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-content-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-content-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /api/v1/content/actuator/health
              port: 8084
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /api/v1/content/actuator/health/readiness
              port: 8084
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/v1/content/actuator/health/liveness
              port: 8084
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,14 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-content-service
type: Opaque
stringData:
  # Azure Blob Storage Connection String
  AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=https;AccountName=blobkteventstorage;AccountKey=tcBN7mAfojbl0uGsOpU7RNuKNhHnzmwDiWjN31liSMVSrWaEK+HHnYKZrjBXXAC6ZPsuxUDlsf8x+AStd++QYg==;EndpointSuffix=core.windows.net"
  # Replicate API Token
  REPLICATE_API_TOKEN: "r8_BsGCJtAg5U5kkMBXSe3pgMkPufSKnUR4NY9gJ"
  # HuggingFace API Token
  HUGGINGFACE_API_TOKEN: ""

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: content-service
  labels:
    app: content-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8084
      protocol: TCP
      name: http
  selector:
    app: content-service

@@ -1,28 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-distribution-service
data:
  # Server Configuration
  SERVER_PORT: "8085"
  # Database Configuration
  DB_HOST: "distribution-postgresql"
  DB_PORT: "5432"
  DB_NAME: "distributiondb"
  DB_USERNAME: "eventuser"
  # Kafka Configuration
  KAFKA_ENABLED: "true"
  KAFKA_CONSUMER_GROUP: "distribution-service"
  # External Channel APIs
  URIDONGNETV_API_URL: "http://localhost:9001/api/uridongnetv"
  RINGOBIZ_API_URL: "http://localhost:9002/api/ringobiz"
  GINITV_API_URL: "http://localhost:9003/api/ginitv"
  INSTAGRAM_API_URL: "http://localhost:9004/api/instagram"
  NAVER_API_URL: "http://localhost:9005/api/naver"
  KAKAO_API_URL: "http://localhost:9006/api/kakao"
  # Logging Configuration
  LOG_FILE: "logs/distribution-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: distribution-service
  labels:
    app: distribution-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distribution-service
  template:
    metadata:
      labels:
        app: distribution-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: distribution-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8085
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-distribution-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-distribution-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health
              port: 8085
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health/readiness
              port: 8085
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health/liveness
              port: 8085
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-distribution-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: distribution-service
  labels:
    app: distribution-service
spec:
  type: ClusterIP
  selector:
    app: distribution-service
  ports:
    - name: http
      port: 80
      targetPort: 8085
      protocol: TCP

@@ -1,28 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-event-service
data:
  # Server Configuration
  SERVER_PORT: "8080"
  # Database Configuration
  DB_HOST: "event-postgresql"
  DB_PORT: "5432"
  DB_NAME: "eventdb"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "2"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "event-service-consumers"
  # Service URLs
  CONTENT_SERVICE_URL: "http://content-service"
  DISTRIBUTION_SERVICE_URL: "http://distribution-service"
  # Logging Configuration
  LOG_LEVEL: "INFO"
  SQL_LOG_LEVEL: "WARN"
  LOG_FILE: "logs/event-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service
  labels:
    app: event-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service
  template:
    metadata:
      labels:
        app: event-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: event-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-event-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-event-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-event-service
type: Opaque
stringData:
  # Database Password
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: event-service
  labels:
    app: event-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: event-service

@@ -1,116 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kt-event-marketing
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: kt-event-marketing-api.20.214.196.128.nip.io
      http:
        paths:
          # User Service
          - path: /api/v1/users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          # Content Service
          - path: /api/v1/content
            pathType: Prefix
            backend:
              service:
                name: content-service
                port:
                  number: 80
          # Event Service
          - path: /api/v1/events
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          - path: /api/v1/jobs
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          - path: /api/v1/redis-test
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          # AI Service
          - path: /api/v1/ai-service
            pathType: Prefix
            backend:
              service:
                name: ai-service
                port:
                  number: 80
          # Participation Service
          - path: /api/v1/participations
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          - path: /api/v1/winners
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          - path: /debug
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          # Analytics Service - Event Analytics
          - path: /api/v1/events/([0-9]+)/analytics
            pathType: ImplementationSpecific
            backend:
              service:
                name: analytics-service
                port:
                  number: 80
          # Analytics Service - User Analytics
          - path: /api/v1/users/([0-9]+)/analytics
            pathType: ImplementationSpecific
            backend:
              service:
                name: analytics-service
                port:
                  number: 80
          # Distribution Service
          - path: /distribution
            pathType: Prefix
            backend:
              service:
                name: distribution-service
                port:
                  number: 80
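The two `ImplementationSpecific` entries rely on the `use-regex` annotation, so `/api/v1/events/{id}/analytics` is routed to analytics-service while the plain `/api/v1/events` prefix stays with event-service. The capture-group patterns match numeric IDs only, which can be checked directly:

```python
import re

# Regex paths from the Ingress above
patterns = [
    r"/api/v1/events/([0-9]+)/analytics",
    r"/api/v1/users/([0-9]+)/analytics",
]

def routes_to_analytics(path: str) -> bool:
    """True if the request path matches one of the analytics regex paths."""
    return any(re.match(p, path) for p in patterns)

print(routes_to_analytics("/api/v1/events/42/analytics"))   # True
print(routes_to_analytics("/api/v1/events/abc/analytics"))  # False - non-numeric id
```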

@@ -1,71 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Common resources
resources:
  # Common ConfigMaps and Secrets
  - cm-common.yaml
  - secret-common.yaml
  - secret-imagepull.yaml
  # Ingress
  - ingress.yaml
  # user-service
  - user-service-deployment.yaml
  - user-service-service.yaml
  - user-service-cm-user-service.yaml
  - user-service-secret-user-service.yaml
  # event-service
  - event-service-deployment.yaml
  - event-service-service.yaml
  - event-service-cm-event-service.yaml
  - event-service-secret-event-service.yaml
  # ai-service
  - ai-service-deployment.yaml
  - ai-service-service.yaml
  - ai-service-cm-ai-service.yaml
  - ai-service-secret-ai-service.yaml
  # content-service
  - content-service-deployment.yaml
  - content-service-service.yaml
  - content-service-cm-content-service.yaml
  - content-service-secret-content-service.yaml
  # distribution-service
  - distribution-service-deployment.yaml
  - distribution-service-service.yaml
  - distribution-service-cm-distribution-service.yaml
  - distribution-service-secret-distribution-service.yaml
  # participation-service
  - participation-service-deployment.yaml
  - participation-service-service.yaml
  - participation-service-cm-participation-service.yaml
  - participation-service-secret-participation-service.yaml
  # analytics-service
  - analytics-service-deployment.yaml
  - analytics-service-service.yaml
  - analytics-service-cm-analytics-service.yaml
  - analytics-service-secret-analytics-service.yaml
# Image tag replacement (will be overridden by overlays)
images:
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
    newTag: latest

@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-participation-service
data:
  # Server Configuration
  SERVER_PORT: "8084"
  # Database Configuration
  DB_HOST: "participation-postgresql"
  DB_PORT: "5432"
  DB_NAME: "participationdb"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "4"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "participation-service-consumers"
  # Logging Configuration
  LOG_LEVEL: "INFO"
  SHOW_SQL: "false"
  LOG_FILE: "logs/participation-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: participation-service
  labels:
    app: participation-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: participation-service
  template:
    metadata:
      labels:
        app: participation-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: participation-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-participation-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-participation-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8084
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8084
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8084
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-participation-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: participation-service
  labels:
    app: participation-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8084
      protocol: TCP
      name: http
  selector:
    app: participation-service


@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-common
type: Opaque
stringData:
# Redis Password
REDIS_PASSWORD: "Hi5Jessica!"
# JWT Secret
JWT_SECRET: "QL0czzXckz18kHnxpaTDoWFkq+3qKO7VQXeNvf2bOoU="


@ -1,16 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kt-event-marketing
type: kubernetes.io/dockerconfigjson
stringData:
.dockerconfigjson: |
{
"auths": {
"acrdigitalgarage01.azurecr.io": {
"username": "acrdigitalgarage01",
"password": "+OY+rmOagorjWvQe/tTk6oqvnZI8SmNbY/Y2o5EDcY+ACRDCDbYk",
"auth": "YWNyZGlnaXRhbGdhcmFnZTAxOitPWStybU9hZ29yald2UWUvdFRrNm9xdm5aSThTbU5iWS9ZMm81RURjWStBQ1JEQ0RiWWs="
}
}
}


@ -1,31 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-user-service
data:
# Server Configuration
SERVER_PORT: "8081"
# Database Configuration
DB_URL: "jdbc:postgresql://user-postgresql:5432/userdb"
DB_HOST: "user-postgresql"
DB_PORT: "5432"
DB_NAME: "userdb"
DB_USERNAME: "eventuser"
DB_DRIVER: "org.postgresql.Driver"
DB_KIND: "postgresql"
DB_POOL_MAX: "20"
DB_POOL_MIN: "5"
DB_CONN_TIMEOUT: "30000"
DB_IDLE_TIMEOUT: "600000"
DB_MAX_LIFETIME: "1800000"
DB_LEAK_THRESHOLD: "60000"
# Redis Configuration (service-specific)
REDIS_DATABASE: "0"
# Kafka Configuration (service-specific)
KAFKA_CONSUMER_GROUP: "user-service-consumers"
# Logging Configuration
LOG_FILE_PATH: "logs/user-service.log"


@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
labels:
app: user-service
spec:
replicas: 1
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
spec:
imagePullSecrets:
- name: kt-event-marketing
containers:
- name: user-service
image: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8081
name: http
envFrom:
- configMapRef:
name: cm-common
- configMapRef:
name: cm-user-service
- secretRef:
name: secret-common
- secretRef:
name: secret-user-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"
startupProbe:
httpGet:
path: /actuator/health
port: 8081
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8081
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8081
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3


@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-user-service
type: Opaque
stringData:
# Database Password
DB_PASSWORD: "Hi5Jessica!"


@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: user-service
labels:
app: user-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8081
protocol: TCP
name: http
selector:
app: user-service


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 1
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 1
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 1
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 1
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 1
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,34 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for dev environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: dev


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 1
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 1
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 3
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 3
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 3
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 3
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 3
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,38 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: prod
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for prod environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: prod


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 3
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 3
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 2
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 2
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 2
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 2
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 2
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,38 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: staging
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for staging environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: staging


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 2
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 2
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"


@ -1,79 +0,0 @@
#!/usr/bin/env python3
"""
Copy K8s manifests to Kustomize base directory and remove namespace declarations
"""
import os
import shutil
import yaml
from pathlib import Path
# Service names
SERVICES = [
'user-service',
'event-service',
'ai-service',
'content-service',
'distribution-service',
'participation-service',
'analytics-service'
]
# Base directories
SOURCE_DIR = Path('deployment/k8s')
BASE_DIR = Path('.github/kustomize/base')
def remove_namespace_from_yaml(content):
"""Remove namespace field from YAML content"""
docs = list(yaml.safe_load_all(content))
for doc in docs:
if doc and isinstance(doc, dict):
if 'metadata' in doc and 'namespace' in doc['metadata']:
del doc['metadata']['namespace']
return yaml.dump_all(docs, default_flow_style=False, sort_keys=False)
def copy_and_process_file(source_path, dest_path):
"""Copy file and remove namespace declaration"""
with open(source_path, 'r', encoding='utf-8') as f:
content = f.read()
# Remove namespace from YAML
processed_content = remove_namespace_from_yaml(content)
# Write to destination
dest_path.parent.mkdir(parents=True, exist_ok=True)
with open(dest_path, 'w', encoding='utf-8') as f:
f.write(processed_content)
print(f"✓ Copied and processed: {source_path} -> {dest_path}")
def main():
print("Starting manifest copy to Kustomize base...")
# Copy common resources
print("\n[Common Resources]")
common_dir = SOURCE_DIR / 'common'
for file in ['cm-common.yaml', 'secret-common.yaml', 'secret-imagepull.yaml', 'ingress.yaml']:
source = common_dir / file
if source.exists():
dest = BASE_DIR / file
copy_and_process_file(source, dest)
# Copy service-specific resources
print("\n[Service Resources]")
for service in SERVICES:
service_dir = SOURCE_DIR / service
if not service_dir.exists():
print(f"⚠ Service directory not found: {service_dir}")
continue
print(f"\nProcessing {service}...")
for file in service_dir.glob('*.yaml'):
dest = BASE_DIR / f"{service}-{file.name}"
copy_and_process_file(file, dest)
print("\n✅ All manifests copied to base directory!")
if __name__ == '__main__':
main()


@ -1,181 +0,0 @@
#!/bin/bash
set -e
###############################################################################
# Backend Services Deployment Script for AKS
#
# Usage:
# ./deploy.sh <environment> [service-name]
#
# Arguments:
# environment - Target environment (dev, staging, prod)
# service-name - Specific service to deploy (optional, deploys all if not specified)
#
# Examples:
# ./deploy.sh dev # Deploy all services to dev
# ./deploy.sh prod user-service # Deploy only user-service to prod
###############################################################################
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Validate arguments
if [ $# -lt 1 ]; then
log_error "Usage: $0 <environment> [service-name]"
log_error "Environment must be one of: dev, staging, prod"
exit 1
fi
ENVIRONMENT=$1
SERVICE=${2:-all}
# Validate environment
if [[ ! "$ENVIRONMENT" =~ ^(dev|staging|prod)$ ]]; then
log_error "Invalid environment: $ENVIRONMENT"
log_error "Must be one of: dev, staging, prod"
exit 1
fi
# Load environment variables
ENV_FILE=".github/config/deploy_env_vars_${ENVIRONMENT}"
if [ ! -f "$ENV_FILE" ]; then
log_error "Environment file not found: $ENV_FILE"
exit 1
fi
source "$ENV_FILE"
log_info "Loaded environment configuration: $ENVIRONMENT"
# Service list
SERVICES=(
"user-service"
"event-service"
"ai-service"
"content-service"
"distribution-service"
"participation-service"
"analytics-service"
)
# Validate service if specified
if [ "$SERVICE" != "all" ]; then
if [[ ! " ${SERVICES[@]} " =~ " ${SERVICE} " ]]; then
log_error "Invalid service: $SERVICE"
log_error "Must be one of: ${SERVICES[*]}"
exit 1
fi
SERVICES=("$SERVICE")
fi
log_info "Services to deploy: ${SERVICES[*]}"
# Check prerequisites
log_info "Checking prerequisites..."
if ! command -v az &> /dev/null; then
log_error "Azure CLI not found. Please install Azure CLI."
exit 1
fi
if ! command -v kubectl &> /dev/null; then
log_error "kubectl not found. Please install kubectl."
exit 1
fi
if ! command -v kustomize &> /dev/null; then
log_error "kustomize not found. Please install kustomize."
exit 1
fi
# Azure login check
log_info "Checking Azure authentication..."
if ! az account show &> /dev/null; then
log_error "Not logged in to Azure. Please run 'az login'"
exit 1
fi
# Get AKS credentials
log_info "Getting AKS credentials..."
az aks get-credentials \
--resource-group "$RESOURCE_GROUP" \
--name "$AKS_CLUSTER" \
--overwrite-existing
# Check namespace
log_info "Checking namespace: $NAMESPACE"
if ! kubectl get namespace "$NAMESPACE" &> /dev/null; then
log_warn "Namespace $NAMESPACE does not exist. Creating..."
kubectl create namespace "$NAMESPACE"
fi
# Build and deploy with Kustomize
OVERLAY_DIR=".github/kustomize/overlays/${ENVIRONMENT}"
if [ ! -d "$OVERLAY_DIR" ]; then
log_error "Kustomize overlay directory not found: $OVERLAY_DIR"
exit 1
fi
log_info "Building Kustomize manifests for $ENVIRONMENT..."
cd "$OVERLAY_DIR"
# Update image tags
log_info "Updating image tags to: $ENVIRONMENT"
kustomize edit set image \
${ACR_NAME}.azurecr.io/kt-event-marketing/user-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/event-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/ai-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/content-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/distribution-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/participation-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/analytics-service:${ENVIRONMENT}
# Apply manifests
log_info "Applying manifests to AKS..."
kustomize build . | kubectl apply -f -
cd - > /dev/null
# Wait for deployments
log_info "Waiting for deployments to be ready..."
for service in "${SERVICES[@]}"; do
log_info "Waiting for $service deployment..."
if ! kubectl rollout status deployment/"$service" -n "$NAMESPACE" --timeout=5m; then
log_error "Deployment of $service failed!"
exit 1
fi
log_info "$service is ready"
done
# Verify deployment
log_info "Verifying deployment..."
echo ""
echo "=== Pods Status ==="
kubectl get pods -n "$NAMESPACE"
echo ""
echo "=== Services ==="
kubectl get svc -n "$NAMESPACE"
echo ""
echo "=== Ingress ==="
kubectl get ingress -n "$NAMESPACE"
log_info "Deployment completed successfully!"
log_info "Environment: $ENVIRONMENT"
log_info "Services: ${SERVICES[*]}"


@ -1,51 +0,0 @@
#!/bin/bash
SERVICES=(user-service event-service ai-service content-service distribution-service participation-service analytics-service)
# Staging patches (2 replicas, increased resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/staging/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 2
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"
YAML
done
# Prod patches (3 replicas, maximum resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/prod/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 3
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"
YAML
done
echo "✅ Generated all patch files for staging and prod"


@ -1,207 +0,0 @@
name: Backend CI/CD Pipeline
on:
# push:
# branches:
# - develop
# - main
# paths:
# - '*-service/**'
# - '.github/workflows/backend-cicd.yaml'
# - '.github/kustomize/**'
pull_request:
branches:
- develop
- main
paths:
- '*-service/**'
workflow_dispatch:
inputs:
environment:
description: 'Target environment'
required: true
type: choice
options:
- dev
- staging
- prod
service:
description: 'Service to deploy (all for all services)'
required: true
default: 'all'
env:
ACR_NAME: acrdigitalgarage01
RESOURCE_GROUP: rg-digitalgarage-01
AKS_CLUSTER: aks-digitalgarage-01
NAMESPACE: kt-event-marketing
JDK_VERSION: '21'
jobs:
detect-changes:
name: Detect Changed Services
runs-on: ubuntu-latest
outputs:
services: ${{ steps.detect.outputs.services }}
environment: ${{ steps.env.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Determine environment
id: env
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
echo "environment=${{ github.event.inputs.environment }}" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/main" ]; then
echo "environment=prod" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/develop" ]; then
echo "environment=dev" >> $GITHUB_OUTPUT
else
echo "environment=dev" >> $GITHUB_OUTPUT
fi
- name: Detect changed services
id: detect
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" != "all" ]; then
echo "services=[\"${{ github.event.inputs.service }}\"]" >> $GITHUB_OUTPUT
elif [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" = "all" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
CHANGED_SERVICES=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} | \
grep -E '^(user|event|ai|content|distribution|participation|analytics)-service/' | \
cut -d'/' -f1 | sort -u | \
jq -R -s -c 'split("\n") | map(select(length > 0))')
if [ "$CHANGED_SERVICES" = "[]" ] || [ -z "$CHANGED_SERVICES" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
echo "services=$CHANGED_SERVICES" >> $GITHUB_OUTPUT
fi
fi
build-and-push:
name: Build and Push - ${{ matrix.service }}
needs: detect-changes
runs-on: ubuntu-latest
strategy:
matrix:
service: ${{ fromJson(needs.detect-changes.outputs.services) }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up JDK ${{ env.JDK_VERSION }}
uses: actions/setup-java@v4
with:
java-version: ${{ env.JDK_VERSION }}
distribution: 'temurin'
cache: 'gradle'
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build with Gradle
run: ./gradlew ${{ matrix.service }}:build -x test
# - name: Run tests
# run: ./gradlew ${{ matrix.service }}:test
- name: Build JAR
run: ./gradlew ${{ matrix.service }}:bootJar
- name: Log in to Azure Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.ACR_NAME }}.azurecr.io
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ./${{ matrix.service }}
file: ./${{ matrix.service }}/Dockerfile
push: true
tags: |
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ needs.detect-changes.outputs.environment }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ github.sha }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:latest
deploy:
name: Deploy to AKS - ${{ needs.detect-changes.outputs.environment }}
needs: [detect-changes, build-and-push]
runs-on: ubuntu-latest
environment: ${{ needs.detect-changes.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Azure login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Get AKS credentials
run: |
az aks get-credentials \
--resource-group ${{ env.RESOURCE_GROUP }} \
--name ${{ env.AKS_CLUSTER }} \
--overwrite-existing
- name: Setup Kustomize
uses: imranismail/setup-kustomize@v2
- name: Deploy with Kustomize
run: |
cd .github/kustomize/overlays/${{ needs.detect-changes.outputs.environment }}
kustomize edit set image \
acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:${{ needs.detect-changes.outputs.environment }}
kustomize build . | kubectl apply -f -
- name: Wait for deployment rollout
run: |
for service in $(echo '${{ needs.detect-changes.outputs.services }}' | jq -r '.[]'); do
echo "Waiting for ${service} deployment..."
kubectl rollout status deployment/${service} -n ${{ env.NAMESPACE }} --timeout=5m
done
- name: Verify deployment
run: |
echo "=== Pods Status ==="
kubectl get pods -n ${{ env.NAMESPACE }}
echo "=== Services ==="
kubectl get svc -n ${{ env.NAMESPACE }}
echo "=== Ingress ==="
kubectl get ingress -n ${{ env.NAMESPACE }}
notify:
name: Notify Deployment Result
needs: [detect-changes, deploy]
runs-on: ubuntu-latest
if: always()
steps:
- name: Deployment Success
if: needs.deploy.result == 'success'
run: |
echo "✅ Deployment to ${{ needs.detect-changes.outputs.environment }} succeeded!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
- name: Deployment Failure
if: needs.deploy.result == 'failure'
run: |
echo "❌ Deployment to ${{ needs.detect-changes.outputs.environment }} failed!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
exit 1

2
.gitignore vendored

@ -61,5 +61,3 @@ k8s/**/*-local.yaml
# Gradle (local environment settings)
gradle.properties
*.hprof
test-data.json

Binary file not shown.



@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="AiServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.ai-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.ai.AiApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.ai.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8081" />
<env name="DB_HOST" value="4.230.112.141" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="aidb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="20.249.182.13:9095,4.217.131.59:9095" />
<env name="KAFKA_CONSUMER_GROUP" value="ai" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>


@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="AnalyticsServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.analytics-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.analytics.AnalyticsApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.analytics.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8087" />
<env name="DB_HOST" value="4.230.49.9" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="analyticdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="analytic" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>


@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="ContentServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.content-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.content.ContentApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.content.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8084" />
<env name="DB_HOST" value="4.217.131.139" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="contentdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
<env name="REPLICATE_API_TOKEN" value="r8_cqE8IzQr9DZ8Dr72ozbomiXe6IFPL0005Vuq9" />
<env name="REPLICATE_MOCK_ENABLED" value="true" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>


@ -1,36 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="DistributionServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.distribution-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.distribution.DistributionApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.distribution.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8085" />
<env name="DB_HOST" value="4.217.133.59" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="distributiondb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="distribution-service" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
<env name="NAVER_BLOG_USERNAME" value="" />
<env name="NAVER_BLOG_PASSWORD" value="" />
<env name="NAVER_BLOG_BLOG_ID" value="" />
<env name="NAVER_BLOG_HEADLESS" value="false" />
<env name="NAVER_BLOG_SESSION_PATH" value="playwright-sessions" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>


@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="EventServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.event-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.event.EventApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.event.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8082" />
<env name="DB_HOST" value="20.249.177.232" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="eventdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="DISTRIBUTION_SERVICE_URL" value="http://localhost:8085" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,29 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="UserServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.user-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.user.UserApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.user.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8083" />
<env name="DB_HOST" value="20.249.125.115" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="userdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,84 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="analytics-service" type="GradleRunConfiguration" factoryName="Gradle">
<ExternalSystemSettings>
<option name="env">
<map>
<!-- Database Configuration -->
<entry key="DB_KIND" value="postgresql" />
<entry key="DB_HOST" value="4.230.49.9" />
<entry key="DB_PORT" value="5432" />
<entry key="DB_NAME" value="analyticdb" />
<entry key="DB_USERNAME" value="eventuser" />
<entry key="DB_PASSWORD" value="Hi5Jessica!" />
<!-- JPA Configuration -->
<entry key="DDL_AUTO" value="create" />
<entry key="SHOW_SQL" value="true" />
<!-- Redis Configuration -->
<entry key="REDIS_HOST" value="20.214.210.71" />
<entry key="REDIS_PORT" value="6379" />
<entry key="REDIS_PASSWORD" value="Hi5Jessica!" />
<entry key="REDIS_DATABASE" value="5" />
<!-- Kafka Configuration (remote servers) -->
<entry key="KAFKA_ENABLED" value="true" />
<entry key="KAFKA_BOOTSTRAP_SERVERS" value="20.249.182.13:9095,4.217.131.59:9095" />
<entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers-v3" />
<!-- Sample Data Configuration (MVP Only) -->
<!-- ⚠️ Events are published via the Kafka Producer (processed by the Consumer) -->
<entry key="SAMPLE_DATA_ENABLED" value="true" />
<!-- Server Configuration -->
<entry key="SERVER_PORT" value="8086" />
<!-- JWT Configuration -->
<entry key="JWT_SECRET" value="dev-jwt-secret-key-for-development-only-kt-event-marketing" />
<entry key="JWT_ACCESS_TOKEN_VALIDITY" value="1800" />
<entry key="JWT_REFRESH_TOKEN_VALIDITY" value="86400" />
<!-- CORS Configuration -->
<entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*" />
<!-- Logging Configuration -->
<entry key="LOG_FILE" value="logs/analytics-service.log" />
<entry key="LOG_LEVEL_APP" value="DEBUG" />
<entry key="LOG_LEVEL_WEB" value="INFO" />
<entry key="LOG_LEVEL_SQL" value="DEBUG" />
<entry key="LOG_LEVEL_SQL_TYPE" value="TRACE" />
</map>
</option>
<option name="executionName" />
<option name="externalProjectPath" value="$PROJECT_DIR$" />
<option name="externalSystemIdString" value="GRADLE" />
<option name="scriptParameters" value="" />
<option name="taskDescriptions">
<list />
</option>
<option name="taskNames">
<list>
<option value="analytics-service:bootRun" />
</list>
</option>
<option name="vmOptions" />
</ExternalSystemSettings>
<ExternalSystemDebugServerProcess>true</ExternalSystemDebugServerProcess>
<ExternalSystemReattachDebugProcess>true</ExternalSystemReattachDebugProcess>
<EXTENSION ID="com.intellij.execution.ExternalSystemRunConfigurationJavaExtension">
<extension name="net.ashald.envfile">
<option name="IS_ENABLED" value="false" />
<option name="IS_SUBST" value="false" />
<option name="IS_PATH_MACRO_SUPPORTED" value="false" />
<option name="IS_IGNORE_MISSING_FILES" value="false" />
<option name="IS_ENABLE_EXPERIMENTAL_INTEGRATIONS" value="false" />
<ENTRIES>
<ENTRY IS_ENABLED="true" PARSER="runconfig" IS_EXECUTABLE="false" />
</ENTRIES>
</extension>
</EXTENSION>
<DebugAllEnabled>false</DebugAllEnabled>
<RunAsTest>false</RunAsTest>
<method v="2" />
</configuration>
</component>

View File

@ -1,620 +0,0 @@
# Develop Branch Change Summary

**Updated**: 2025-10-30
**Merged branch**: feature/event → develop
**Merge commit**: 3465a35

---

## 📊 Change Statistics

```
60 files changed
+2,795 insertions
-222 deletions
```

---
## 🎯 Key Changes

### 1. Business-Friendly ID Generation System

#### EventId Generation Logic

**File**: `event-service/.../EventIdGenerator.java` (new)

**ID format**: `EVT-{store_id}-{timestamp}-{random}`

```
Example: EVT-str_dev_test_001-20251030001311-70eea424
```

**Characteristics**:
- ✅ Business-meaningful prefix (EVT)
- ✅ Includes the store identifier (store_id)
- ✅ Timestamp allows time-based tracing
- ✅ Random hash guarantees uniqueness
- ✅ Human-readable format

**Implementation**:

```java
public class EventIdGenerator {

    private static final String PREFIX = "EVT";

    public static String generate(String storeId) {
        String cleanStoreId = sanitizeStoreId(storeId);
        String timestamp = LocalDateTime.now()
            .format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
        String randomHash = UUID.randomUUID().toString()
            .substring(0, 8);
        return String.format("%s-%s-%s-%s",
            PREFIX, cleanStoreId, timestamp, randomHash);
    }
}
```
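The `sanitizeStoreId` helper called by `generate` is not reproduced in this summary. A minimal sketch of what such a helper might do — the exact sanitization rules here are an assumption, not the project's implementation:

```java
public class StoreIdSanitizer {

    // Hypothetical sanitizer: keep letters, digits and underscores only, so
    // the store id cannot inject extra '-' separators into the generated id.
    static String sanitizeStoreId(String storeId) {
        if (storeId == null || storeId.isBlank()) {
            return "unknown"; // assumed fallback for missing store ids
        }
        return storeId.trim().replaceAll("[^A-Za-z0-9_]", "_");
    }
}
```

With a rule like this, `str_dev_test_001` passes through unchanged, matching the example ID above.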
#### JobId Generation Logic

**File**: `event-service/.../JobIdGenerator.java` (new)

**ID format**: `JOB-{type}-{timestamp}-{random}`

```
Example: JOB-IMG-1761750847428-b88d2f54
```

**Type codes**:
- `IMG`: image generation job
- `AI`: AI recommendation job
- `REG`: image regeneration job

**Characteristics**:
- ✅ Job type is identifiable from the ID
- ✅ Timestamp traces when the job was created
- ✅ UUID-based uniqueness guarantee
- ✅ Easier log analysis and debugging
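The generator body is not shown in this summary; judging from the example (`JOB-IMG-1761750847428-b88d2f54`, an epoch-millisecond timestamp), a sketch consistent with the described format could look like the following — the class and method shape are assumptions:

```java
import java.util.UUID;

public class JobIdGenerator {

    private static final String PREFIX = "JOB";

    // typeCode is one of the codes listed above: IMG, AI, REG
    public static String generate(String typeCode) {
        long timestamp = System.currentTimeMillis(); // 13-digit epoch millis
        String randomHash = UUID.randomUUID().toString().substring(0, 8);
        return String.format("%s-%s-%d-%s", PREFIX, typeCode, timestamp, randomHash);
    }
}
```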
---

### 2. Kafka Message Structure Improvements

#### Field Name Standardization (snake_case → camelCase)

**Changed files**:
- `AIEventGenerationJobMessage.java`
- `EventCreatedMessage.java`
- `ImageJobKafkaProducer.java`
- `AIJobKafkaProducer.java`
- related Consumer classes

**Before**:

```json
{
  "job_id": "...",
  "event_id": "...",
  "store_id": "...",
  "store_name": "..."
}
```

**After**:

```json
{
  "jobId": "...",
  "eventId": "...",
  "storeId": "...",
  "storeName": "..."
}
```

**Benefits**:
- ✅ Follows Java naming conventions
- ✅ Simpler JSON serialization/deserialization
- ✅ Field names consistent with the frontend
- ✅ Better code readability

**Affected messages**:
1. **Image generation job message** (`image-generation-job`)
   - jobId, eventId, prompt, styles, platforms, etc.
2. **AI event generation job message** (`ai-event-generation-job`)
   - jobId, eventId, objective, storeInfo, etc.
3. **Event created message** (`event-created`)
   - eventId, storeId, storeName, objective, etc.
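The renaming itself was done by hand on the DTO fields; purely as an illustration of the mapping applied above, the snake_case → camelCase conversion can be expressed as:

```java
public class FieldNames {

    // Illustrative only: converts "job_id" -> "jobId", "store_name" -> "storeName"
    static String toCamelCase(String snake) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : snake.toCharArray()) {
            if (c == '_') {
                upperNext = true;      // next character starts a new word
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }
}
```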
---

### 3. Database Schema and Migration

#### New Schema File

**File**: `develop/database/schema/create_event_tables.sql`

**Table structure**:

```sql
-- events table
CREATE TABLE events (
    id VARCHAR(100) PRIMARY KEY,  -- EVT-{store_id}-{timestamp}-{hash}
    user_id VARCHAR(50) NOT NULL,
    store_id VARCHAR(50) NOT NULL,
    store_name VARCHAR(200),
    objective VARCHAR(50),
    status VARCHAR(20),
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);

-- jobs table
CREATE TABLE jobs (
    id VARCHAR(100) PRIMARY KEY,  -- JOB-{type}-{timestamp}-{hash}
    event_id VARCHAR(100),
    job_type VARCHAR(50),
    status VARCHAR(20),
    progress INTEGER,
    result_message TEXT,
    error_message TEXT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);

-- ai_recommendations table
CREATE TABLE ai_recommendations (
    id BIGSERIAL PRIMARY KEY,
    event_id VARCHAR(100),
    recommendation_text TEXT
    -- ... other fields
);

-- generated_images table
CREATE TABLE generated_images (
    id BIGSERIAL PRIMARY KEY,
    event_id VARCHAR(100),
    image_url TEXT,
    style VARCHAR(50),
    platform VARCHAR(50)
    -- ... other fields
);
```

#### Migration Script

**File**: `develop/database/migration/alter_event_id_to_varchar.sql`

**Purpose**: change the existing BIGINT ids to VARCHAR.

```sql
-- Step 1: create backup tables
CREATE TABLE events_backup AS SELECT * FROM events;
CREATE TABLE jobs_backup AS SELECT * FROM jobs;

-- Step 2: drop the existing tables
DROP TABLE IF EXISTS events CASCADE;
DROP TABLE IF EXISTS jobs CASCADE;

-- Step 3: recreate the tables with the new schema
-- (run create_event_tables.sql)

-- Step 4: migrate data
-- (if needed, convert existing rows to the new format and insert them)
```

**Caution**:
- ⚠️ In production, always back up before running this
- ⚠️ Foreign-key constraints must be re-created
- ⚠️ Must be deployed together with the application code

---
### 4. Content Service Integration and Improvements

#### Content Service Configuration Update

**File**: `content-service/src/main/resources/application.yml`

**Changes**:

```yaml
# JWT settings added
jwt:
  secret: ${JWT_SECRET:kt-event-marketing-jwt-secret...}
  access-token-validity: ${JWT_ACCESS_TOKEN_VALIDITY:3600000}

# Azure Blob Storage settings added
azure:
  storage:
    connection-string: ${AZURE_STORAGE_CONNECTION_STRING:...}
    container-name: ${AZURE_CONTAINER_NAME:content-images}
```

#### Service Improvements

**Files**: `content-service/.../RegenerateImageService.java`, `StableDiffusionImageGenerator.java`

**Highlights**:
- ✅ Image regeneration logic added (28 lines)
- ✅ Improved Stable Diffusion integration (28 lines)
- ✅ Improved mock mode (development environment)
- ✅ Stronger error handling

---
### 5. Event Service Refactoring

#### DTO Restructuring

**Changed files**:
- Request DTOs: `AiRecommendationRequest`, `SelectImageRequest`
- Response DTOs: `EventCreatedResponse`, `EventDetailResponse`
- Kafka DTOs: all message classes

**Key changes**:
1. **Field name standardization**: snake_case → camelCase
2. **ID type change**: Long → String
3. **Explicit nullability**: @Nullable annotations added
4. **Stronger validation**: @NotNull, @NotBlank

#### Service Layer Improvements

**Files**: `EventService.java`, `JobService.java`

**Before**:

```java
public EventCreatedResponse createEvent(CreateEventRequest request) {
    Event event = new Event();
    event.setId(generateSequentialId()); // Long id
    // ...
}
```

**After**:

```java
public EventCreatedResponse createEvent(CreateEventRequest request) {
    String eventId = EventIdGenerator.generate(request.getStoreId());
    Event event = Event.builder()
        .id(eventId) // String id
        .storeId(request.getStoreId())
        // ...
        .build();
}
```

**Improvements**:
- ✅ Uses EventIdGenerator
- ✅ Builder pattern applied
- ✅ Business logic separated out
- ✅ Better error handling

---
### 6. Kafka Integration Improvements

#### Producer Improvements

**Files**: `AIJobKafkaProducer.java`, `ImageJobKafkaProducer.java`

**Highlights**:

```java
@Service
@RequiredArgsConstructor
@Slf4j
public class ImageJobKafkaProducer {

    public void sendImageGenerationJob(ImageGenerationJobMessage message) {
        log.info("Publishing image generation job message - JobId: {}",
            message.getJobId());
        kafkaTemplate.send(topicName, message.getJobId(), message)
            .whenComplete((result, ex) -> {
                if (ex != null) {
                    log.error("Failed to publish message: {}", ex.getMessage());
                } else {
                    log.info("Message published - Offset: {}",
                        result.getRecordMetadata().offset());
                }
            });
    }
}
```

**Improvements**:
- ✅ Detailed logging added
- ✅ Asynchronous callback handling
- ✅ Stronger error handling
- ✅ Message key set (jobId)

#### Consumer Improvements

**Files**: `ImageJobKafkaConsumer.java`, `AIJobKafkaConsumer.java`

**Highlights**:

```java
@KafkaListener(
    topics = "${app.kafka.topics.image-generation-job}",
    groupId = "${spring.kafka.consumer.group-id}"
)
public void consumeImageJob(
    @Payload ImageGenerationJobMessage message,
    Acknowledgment ack
) {
    log.info("Image job message received - JobId: {}", message.getJobId());
    try {
        // Process the message
        processImageJob(message);
        // Manual acknowledgment
        ack.acknowledge();
        log.info("Message processed - JobId: {}", message.getJobId());
    } catch (Exception e) {
        log.error("Message processing failed: {}", e.getMessage());
        // Retry logic or DLQ forwarding
    }
}
```

**Improvements**:
- ✅ Manual acknowledgment pattern
- ✅ Detailed logging
- ✅ Stronger exception handling
- ✅ Message retry mechanism

---
### 7. Security and Authentication Improvements

#### JWT Token Handling

**Files**: `common/security/JwtTokenProvider.java`, `UserPrincipal.java`

**Key changes**:

```java
public class JwtTokenProvider {

    public String getUserId(String token) {
        Claims claims = parseToken(token);
        return claims.get("userId", String.class); // explicit type conversion
    }

    public String getStoreId(String token) {
        Claims claims = parseToken(token);
        return claims.get("storeId", String.class);
    }
}
```

**Improvements**:
- ✅ Better type safety
- ✅ Better null handling
- ✅ Hardened token-parsing logic
- ✅ Better error messages
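The accessors above depend on jjwt's `Claims`; the null-handling idea can be shown with a plain-`Map` stand-in (the `Optional`-based pattern is an illustration, not the project's exact code):

```java
import java.util.Map;
import java.util.Optional;

public class ClaimAccess {

    // Stand-in for Claims#get(name, String.class): returns empty instead of
    // failing when the claim is missing or has an unexpected type.
    static Optional<String> stringClaim(Map<String, ?> claims, String name) {
        Object value = claims.get(name);
        return (value instanceof String s) ? Optional.of(s) : Optional.empty();
    }
}
```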
#### Development-Environment Authentication Filter

**File**: `event-service/.../DevAuthenticationFilter.java`

**Improvements**:
- ✅ Mock authentication for the development environment
- ✅ Improved JWT token parsing
- ✅ Logging added

---
### 8. Tests and Documentation

#### Integration Test Report

**File**: `test/content-service-integration-test-results.md` (new, 673 lines)

**Contents**:
- ✅ Results of 9 test scenarios
- ✅ Success rate: 100% (9/9)
- ✅ HTTP communication verified
- ✅ Job management mechanism verified
- ✅ EventId-based lookups verified
- ✅ Image regeneration verified
- ✅ Performance analysis (average response time < 150ms)

#### Architecture Analysis Document

**File**: `test/content-service-integration-analysis.md` (new, 504 lines)

**Contents**:
- ✅ Analysis of the content-service API structure
- ✅ Redis-based job management mechanism
- ✅ Current state of the Kafka integration
- ✅ Inter-service communication structure
- ✅ Recommendations and next steps

#### Kafka Integration Test Report

**File**: `test/test-kafka-integration-results.md` (new, 348 lines)

**Contents**:
- ✅ event-service Kafka Producer/Consumer verified
- ✅ Kafka broker connectivity tested
- ✅ Message publish/consume verified
- ✅ Manual acknowledgment pattern verified
- ✅ Confirmed that content-service has no Kafka Consumer

#### API Test Results

**File**: `test/API-TEST-RESULT.md` (moved)

**Contents**:
- ✅ Existing API test results
- ✅ Moved into the test/ folder for tidiness

#### Test Automation Scripts

**Files**:
- `test-content-service.sh` (new, 82 lines)
- `run-content-service.sh` (new, 80 lines)
- `run-content-service.bat` (new, 81 lines)

**Features**:
- ✅ Automated content-service tests
- ✅ Server launch scripts (Linux/Windows)
- ✅ Runs 7 test scenarios automatically
- ✅ Health check and API verification

#### Test Data

**Files**:
- `test-integration-event.json`
- `test-integration-objective.json`
- `test-integration-ai-request.json`
- `test-image-generation.json`
- `test-ai-recommendation.json`

**Purpose**:
- ✅ Sample data for integration tests
- ✅ API test automation
- ✅ Reproducible test environment

---
### 9. Runtime Configuration

#### IntelliJ Run Profiles Updated

**Files**:
- `.run/ContentServiceApplication.run.xml`
- `.run/AiServiceApplication.run.xml`

**Changes**:

```xml
<envs>
  <env name="SERVER_PORT" value="8084" />
  <env name="REDIS_HOST" value="20.214.210.71" />
  <env name="REDIS_PORT" value="6379" />
  <env name="REDIS_PASSWORD" value="Hi5Jessica!" />
  <env name="DB_HOST" value="4.217.131.139" />
  <env name="DB_PORT" value="5432" />
  <env name="REPLICATE_MOCK_ENABLED" value="true" />
  <!-- JWT and Azure settings added -->
</envs>
```

**Improvements**:
- ✅ Environment variables set explicitly
- ✅ Mock mode setting added
- ✅ Database connection details made explicit

---
## 🔍 Kafka Architecture Status

### Architecture as Implemented

```
┌─────────────────┐
│  event-service  │
│   (Port 8081)   │
└────────┬────────┘
         ├─── Kafka Producer ───→ Kafka Topic (image-generation-job)
         │                              │
         │                              │ (consumed by event-service's own Consumer)
         │                              ↓
         │                      ┌──────────────┐
         │                      │ event-service│
         │                      │   Consumer   │
         │                      └──────────────┘
         └─── Redis Job Data ───→ Redis Cache
                                        │
                                ┌───────┴────────┐
                                │ content-service│
                                │  (Port 8084)   │
                                └────────────────┘
```

### Key Findings
- ⚠️ **content-service has no Kafka Consumer**
- ✅ Services communicate through Redis-based job management
- ✅ event-service implements both the Producer and the Consumer
- ⚠️ The logical architecture design does not match the actual implementation

### Recommendations
1. **Short term**: update the design documents to match the implementation
2. **Mid term**: automate API documentation (Swagger/OpenAPI)
3. **Long term**: implement a Kafka Consumer in content-service

---
## 📊 Performance and Quality Metrics

### API Response Times

```
Health check:  < 50ms
GET requests:  50-100ms
POST requests: 100-150ms
```

### Job Processing Times (Mock Mode)

```
Generate 4 images:  ~0.2s
Regenerate 1 image: ~0.1s
```

### Test Success Rates

```
Integration tests: 100% (9/9 passed)
Kafka integration: 100% (event-service)
API endpoints:     100% (all healthy)
```

### Code Metrics

```
Lines added:   2,795
Lines removed:   222
Net increase:  2,573
Files changed:    60
```

---
## 🚀 Deployment Readiness

### ✅ Completed
- [x] EventId/JobId generation logic implemented
- [x] Kafka message structure improved
- [x] Database schema defined
- [x] content-service integration tests completed
- [x] API documentation and test reports written
- [x] Test automation scripts written

### ⏳ Upcoming
- [ ] content-service Kafka Consumer (optional)
- [ ] Production database migration
- [ ] Swagger/OpenAPI documentation automation
- [ ] Performance monitoring setup
- [ ] Log collection and analysis pipeline

### ⚠️ Cautions
1. **Database migration**: back up before deploying to production
2. **Kafka message compatibility**: if existing consumers exist, assess the impact of the message-format change
3. **ID format change**: review compatibility with existing data
4. **Environment variables**: confirm the required variables are set in every environment

---

## 📝 Key Commits

```
3465a35 Merge branch 'feature/event' into develop
8ff79ca Move test result files into the test/ folder
336d811 Complete content-service integration tests and write report
ee941e4 Improve Event-AI Kafka integration and switch message fields to camelCase
b71d27a Implement business-friendly eventId and jobId generation
34291e1 Improve backend service structure and add database schema
```
---

## 🔗 Related Documents

1. **Test reports**
   - `test/content-service-integration-test-results.md`
   - `test/test-kafka-integration-results.md`
   - `test/API-TEST-RESULT.md`
2. **Architecture**
   - `test/content-service-integration-analysis.md`
3. **Database**
   - `develop/database/schema/create_event_tables.sql`
   - `develop/database/migration/alter_event_id_to_varchar.sql`
4. **Test scripts**
   - `test-content-service.sh`
   - `run-content-service.sh`
   - `run-content-service.bat`

---

**Author**: Backend Developer
**Reviewer**: System Architect
**Last updated**: 2025-10-30 01:40

View File

@ -1,24 +0,0 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8083/api/v1/ai-service/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -1,7 +1,3 @@
-bootJar {
-    archiveFileName = 'ai-service.jar'
-}
 dependencies {
     // Kafka Consumer
     implementation 'org.springframework.kafka:spring-kafka'

View File

@ -4,7 +4,6 @@ import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
 import org.springframework.security.config.annotation.web.builders.HttpSecurity;
 import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
-import org.springframework.security.config.annotation.web.configuration.WebSecurityCustomizer;
 import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
 import org.springframework.security.config.http.SessionCreationPolicy;
 import org.springframework.security.web.SecurityFilterChain;
@ -28,22 +27,21 @@ import java.util.List;
 @EnableWebSecurity
 public class SecurityConfig {

-    /**
-     * Security filter chain configuration
-     * - permit all requests (internal API)
-     * - CSRF disabled
-     * - stateless sessions
-     */
     @Bean
     public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
         http
-            // Disable CSRF (not needed for a REST API)
             .csrf(AbstractHttpConfigurer::disable)
-            // CORS configuration
             .cors(cors -> cors.configurationSource(corsConfigurationSource()))
-            // Session handling (JWT-based authentication)
-            .sessionManagement(session ->
-                session.sessionCreationPolicy(SessionCreationPolicy.STATELESS)
-            )
-            // Permit all requests (for testing)
+            .sessionManagement(session -> session.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
             .authorizeHttpRequests(auth -> auth
-                .requestMatchers("/health", "/actuator/**", "/v3/api-docs/**", "/swagger-ui/**").permitAll()
-                .requestMatchers("/internal/**").permitAll() // Internal API
                 .anyRequest().permitAll()
             );
@ -52,14 +50,11 @@ public class SecurityConfig {
     /**
      * CORS configuration
-     * - allow all origins (for Swagger UI testing)
-     * - allow all HTTP methods
-     * - allow all headers
      */
     @Bean
     public CorsConfigurationSource corsConfigurationSource() {
         CorsConfiguration configuration = new CorsConfiguration();
-        configuration.setAllowedOriginPatterns(List.of("*")); // allow all origins
+        configuration.setAllowedOrigins(Arrays.asList("http://localhost:3000", "http://localhost:8080"));
         configuration.setAllowedMethods(Arrays.asList("GET", "POST", "PUT", "DELETE", "OPTIONS", "PATCH"));
         configuration.setAllowedHeaders(List.of("*"));
         configuration.setAllowCredentials(true);
@ -69,13 +64,4 @@ public class SecurityConfig {
         source.registerCorsConfiguration("/**", configuration);
         return source;
     }
-
-    /**
-     * Exclude Chrome DevTools and static resource requests from Spring Security
-     */
-    @Bean
-    public WebSecurityCustomizer webSecurityCustomizer() {
-        return (web) -> web.ignoring()
-                .requestMatchers("/.well-known/**");
-    }
 }

View File

@ -20,10 +20,6 @@ public class SwaggerConfig {
     @Bean
     public OpenAPI openAPI() {
-        Server vmServer = new Server();
-        vmServer.setUrl("http://kt-event-marketing-api.20.214.196.128.nip.io/api/v1/ai");
-        vmServer.setDescription("VM Development Server");
         Server localServer = new Server();
         localServer.setUrl("http://localhost:8083");
         localServer.setDescription("Local Development Server");
@ -63,6 +59,6 @@ public class SwaggerConfig {
         return new OpenAPI()
                 .info(info)
-                .servers(List.of(vmServer, localServer, devServer, prodServer));
+                .servers(List.of(localServer, devServer, prodServer));
     }
 }

View File

@ -32,7 +32,7 @@ public class HealthController {
      * Service health check
      */
     @Operation(summary = "Service health check", description = "Check AI Service status and external integrations")
-    @GetMapping("/health")
+    @GetMapping("/api/v1/ai-service/health")
     public ResponseEntity<HealthCheckResponse> healthCheck() {
         // Check Redis status
         ServiceStatus redisStatus = checkRedis();

View File

@ -27,7 +27,7 @@ import java.util.Map;
 @Slf4j
 @Tag(name = "Internal API", description = "API for internal service-to-service communication")
 @RestController
-@RequestMapping("/jobs")
+@RequestMapping("/api/v1/ai-service/internal/jobs")
 @RequiredArgsConstructor
 public class InternalJobController {

View File

@ -31,7 +31,7 @@ import java.util.Set;
 @Slf4j
 @Tag(name = "Internal API", description = "API for internal service-to-service communication")
 @RestController
-@RequestMapping("/recommendations")
+@RequestMapping("/api/v1/ai-service/internal/recommendations")
 @RequiredArgsConstructor
 public class InternalRecommendationController {

View File

@ -5,33 +5,31 @@ spring:
   # Redis Configuration
   data:
     redis:
-      host: ${REDIS_HOST:20.214.210.71}
+      host: ${REDIS_HOST:redis-external} # Production: redis-external, Local: 20.214.210.71
       port: ${REDIS_PORT:6379}
-      password: ${REDIS_PASSWORD:Hi5Jessica!}
-      database: ${REDIS_DATABASE:3}
+      password: ${REDIS_PASSWORD:}
+      database: ${REDIS_DATABASE:0} # AI Service uses database 3
       timeout: ${REDIS_TIMEOUT:3000}
       lettuce:
         pool:
-          max-active: ${REDIS_POOL_MAX:8}
-          max-idle: ${REDIS_POOL_IDLE:8}
-          min-idle: ${REDIS_POOL_MIN:2}
-          max-wait: ${REDIS_POOL_WAIT:-1ms}
+          max-active: 8
+          max-idle: 8
+          min-idle: 2
+          max-wait: -1ms
   # Kafka Consumer Configuration
   kafka:
-    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:20.249.182.13:9095,4.217.131.59:9095}
+    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
     consumer:
-      group-id: ${KAFKA_CONSUMER_GROUP:ai-service-consumers}
+      group-id: ai-service-consumers
       auto-offset-reset: earliest
       enable-auto-commit: false
       key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
       value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
       properties:
         spring.json.trusted.packages: "*"
-        spring.json.use.type.headers: false
-        spring.json.value.default.type: com.kt.ai.kafka.message.AIJobMessage
-        max.poll.records: 10
-        session.timeout.ms: 30000
+        max.poll.records: ${KAFKA_MAX_POLL_RECORDS:10}
+        session.timeout.ms: ${KAFKA_SESSION_TIMEOUT:30000}
     listener:
       ack-mode: manual
@ -39,7 +37,7 @@ spring:
 server:
   port: ${SERVER_PORT:8083}
   servlet:
-    context-path: /api/v1/ai
+    context-path: /
     encoding:
       charset: UTF-8
       enabled: true
@ -47,13 +45,13 @@ server:
 # JWT Configuration
 jwt:
-  secret: ${JWT_SECRET:kt-event-marketing-secret-key-for-development-only-please-change-in-production}
-  access-token-validity: ${JWT_ACCESS_TOKEN_VALIDITY:604800000}
+  secret: ${JWT_SECRET:}
+  access-token-validity: ${JWT_ACCESS_TOKEN_VALIDITY:1800}
   refresh-token-validity: ${JWT_REFRESH_TOKEN_VALIDITY:86400}
 # CORS Configuration
 cors:
-  allowed-origins: ${CORS_ALLOWED_ORIGINS:http://localhost:*,http://kt-event-marketing.20.214.196.128.nip.io}
+  allowed-origins: ${CORS_ALLOWED_ORIGINS:http://localhost:3000,http://localhost:8080}
   allowed-methods: ${CORS_ALLOWED_METHODS:GET,POST,PUT,DELETE,OPTIONS,PATCH}
   allowed-headers: ${CORS_ALLOWED_HEADERS:*}
   allow-credentials: ${CORS_ALLOW_CREDENTIALS:true}
@ -93,39 +91,45 @@ springdoc:
 # Logging Configuration
 logging:
   level:
-    root: ${LOG_LEVEL_ROOT:INFO}
-    com.kt.ai: ${LOG_LEVEL_AI:DEBUG}
-    org.springframework.kafka: ${LOG_LEVEL_KAFKA:INFO}
-    org.springframework.data.redis: ${LOG_LEVEL_REDIS:INFO}
-    io.github.resilience4j: ${LOG_LEVEL_RESILIENCE4J:DEBUG}
+    root: INFO
+    com.kt.ai: DEBUG
+    org.springframework.kafka: INFO
+    org.springframework.data.redis: INFO
+    io.github.resilience4j: DEBUG
   pattern:
     console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
     file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
   file:
-    name: ${LOG_FILE_NAME:logs/ai-service.log}
+    name: ${LOG_FILE:logs/ai-service.log}
   logback:
     rollingpolicy:
-      max-file-size: ${LOG_FILE_MAX_SIZE:10MB}
-      max-history: ${LOG_FILE_MAX_HISTORY:7}
-      total-size-cap: ${LOG_FILE_TOTAL_CAP:100MB}
+      max-file-size: 10MB
+      max-history: 7
+      total-size-cap: 100MB
 # Kafka Topics Configuration
 kafka:
   topics:
-    ai-job: ${KAFKA_TOPICS_AI_JOB:ai-event-generation-job}
-    ai-job-dlq: ${KAFKA_TOPICS_AI_JOB_DLQ:ai-event-generation-job-dlq}
+    ai-job: ${KAFKA_TOPIC_AI_JOB:ai-event-generation-job}
+    ai-job-dlq: ${KAFKA_TOPIC_AI_JOB_DLQ:ai-event-generation-job-dlq}
-# AI API Configuration (real API in use)
+# AI External API Configuration
 ai:
-  provider: ${AI_PROVIDER:CLAUDE}
   claude:
-    api-url: ${AI_CLAUDE_API_URL:https://api.anthropic.com/v1/messages}
-    api-key: ${AI_CLAUDE_API_KEY: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}
-    anthropic-version: ${AI_CLAUDE_ANTHROPIC_VERSION:2023-06-01}
-    model: ${AI_CLAUDE_MODEL:claude-sonnet-4-5-20250929}
-    max-tokens: ${AI_CLAUDE_MAX_TOKENS:4096}
-    temperature: ${AI_CLAUDE_TEMPERATURE:0.7}
-    timeout: ${AI_CLAUDE_TIMEOUT:300000}
+    api-url: ${CLAUDE_API_URL:https://api.anthropic.com/v1/messages}
+    api-key: ${CLAUDE_API_KEY:}
+    anthropic-version: ${CLAUDE_ANTHROPIC_VERSION:2023-06-01}
+    model: ${CLAUDE_MODEL:claude-3-5-sonnet-20241022}
+    max-tokens: ${CLAUDE_MAX_TOKENS:4096}
+    temperature: ${CLAUDE_TEMPERATURE:0.7}
+    timeout: ${CLAUDE_TIMEOUT:300000} # 5 minutes
+  gpt4:
+    api-url: ${GPT4_API_URL:https://api.openai.com/v1/chat/completions}
+    api-key: ${GPT4_API_KEY:}
+    model: ${GPT4_MODEL:gpt-4-turbo-preview}
+    max-tokens: ${GPT4_MAX_TOKENS:4096}
+    timeout: ${GPT4_TIMEOUT:300000} # 5 minutes
+  provider: ${AI_PROVIDER:CLAUDE} # CLAUDE or GPT4
 # Circuit Breaker Configuration
 resilience4j:

View File

@ -12,7 +12,7 @@
         <entry key="DB_PASSWORD" value="Hi5Jessica!" />
         <!-- JPA Configuration -->
-        <entry key="DDL_AUTO" value="create" />
+        <entry key="DDL_AUTO" value="update" />
         <entry key="SHOW_SQL" value="true" />
         <!-- Redis Configuration -->
@ -24,7+24,7 @@
         <!-- Kafka Configuration (remote servers) -->
         <entry key="KAFKA_ENABLED" value="true" />
         <entry key="KAFKA_BOOTSTRAP_SERVERS" value="20.249.182.13:9095,4.217.131.59:9095" />
-        <entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers-v3" />
+        <entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers" />
         <!-- Sample Data Configuration (MVP Only) -->
         <!-- ⚠️ Events are published via the Kafka Producer (processed by the Consumer) -->
@ -39,7 +39,7 @@
         <entry key="JWT_REFRESH_TOKEN_VALIDITY" value="86400" />
         <!-- CORS Configuration -->
-        <entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*,http://*.nip.io:*" />
+        <entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*" />
         <!-- Logging Configuration -->
         <entry key="LOG_FILE" value="logs/analytics-service.log" />

View File

@ -1,24 +0,0 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY analytics-service/build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8086/api/v1/analytics/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

View File

@ -1,7 +1,3 @@
-bootJar {
-    archiveFileName = 'analytics-service.jar'
-}
 dependencies {
     // Kafka Consumer
     implementation 'org.springframework.kafka:spring-kafka'

View File

@ -1,108 +0,0 @@
# Backend–Frontend API Integration Verification and Fixes

**Date**: 2025-10-28
**Branch**: feature/analytics
**Scope**: Analytics Service backend DTO and Service changes

---

## 📝 Summary of Changes

### 1⃣ Field Name Alignment (Frontend Compatibility)

**Goal**: make the backend response DTO field names match the frontend mock-data field names.

| Before (backend) | After (backend) | Frontend |
|-----------------|----------------|-----------|
| `summary.totalParticipants` | `summary.participants` | `summary.participants` ✅ |
| `channelPerformance[].channelName` | `channelPerformance[].channel` | `channelPerformance[].channel` ✅ |
| `roi.totalInvestment` | `roi.totalCost` | `roiDetail.totalCost` ✅ |

### 2⃣ Delta Data Added

**Goal**: provide the delta indicators and target values the frontend requires.

| Field | Type | Description | Current value |
|-----|------|------|---------|
| `summary.participantsDelta` | `Integer` | Participant delta vs. the previous period | `0` (TODO: computation logic needed) |
| `summary.targetRoi` | `Double` | Target ROI (%) | Taken from EventStats |
---
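The `participantsDelta` value above is currently hard-coded to `0`. A minimal sketch of the missing calculation, assuming the delta is simply current-period participants minus previous-period participants (class and method names here are hypothetical, not part of the service):

```java
public class ParticipantsDeltaSketch {
    // Hypothetical helper for the TODO noted in the table above:
    // positive when participation grew against the previous period.
    static int participantsDelta(int currentPeriod, int previousPeriod) {
        return currentPeriod - previousPeriod;
    }

    public static void main(String[] args) {
        System.out.println(participantsDelta(120, 100)); // prints 20
    }
}
```

In the real service the two period counts would come from `EventStats` queries over the current and preceding date ranges.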
## 🔧 Modified Files
### DTO (Response Structure Changed)
1. **AnalyticsSummary.java**
   - ✅ `totalParticipants` → `participants`
   - ✅ `participantsDelta` field added
   - ✅ `targetRoi` field added
2. **ChannelSummary.java**
   - ✅ `channelName` → `channel`
3. **RoiSummary.java**
   - ✅ `totalInvestment` → `totalCost`
### Entity (Database Schema Changed)
4. **EventStats.java**
   - ✅ `targetRoi` field added (`BigDecimal`, default: 0)
### Service (Business Logic Updated)
5. **AnalyticsService.java**
   - ✅ uses `.participants()`
   - ✅ `.participantsDelta(0)` added (marked TODO)
   - ✅ `.targetRoi()` added
   - ✅ uses `.channel()`
6. **ROICalculator.java**
   - ✅ uses `.totalCost()`
7. **UserAnalyticsService.java**
   - ✅ uses `.participants()`
   - ✅ `.participantsDelta(0)` added
   - ✅ uses `.channel()`
   - ✅ uses `.totalCost()`
---
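The list above renames `totalInvestment` to `totalCost` in the ROI path, but `ROICalculator` itself is not shown in this diff. As a hedged sketch only, the conventional formula such a field would feed is ROI (%) = (revenue - totalCost) / totalCost * 100 (class and method names are hypothetical):

```java
public class RoiSketch {
    // Hypothetical stand-in for the ROICalculator logic referenced above.
    static double roiPercent(double revenue, double totalCost) {
        if (totalCost == 0.0) {
            return 0.0; // avoid division by zero when no cost is recorded
        }
        return (revenue - totalCost) / totalCost * 100.0;
    }

    public static void main(String[] args) {
        System.out.println(roiPercent(150.0, 100.0)); // prints 50.0
    }
}
```

Returning `0.0` for a zero cost is one possible convention; the actual service may treat that case differently.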
## ✅ Verification Results
### Compilation Succeeded
```bash
$ ./gradlew analytics-service:compileJava
BUILD SUCCESSFUL in 8s
```
---
## 📊 Database Schema Changes
### EventStats Table
```sql
ALTER TABLE event_stats
ADD COLUMN target_roi DECIMAL(10,2) DEFAULT 0.00;
```
**⚠️ Caution**
- Applied automatically depending on the Spring Boot JPA `ddl-auto` setting
---
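A minimal sketch of the setting that caution refers to; the property path is standard Spring Boot, but the chosen value is an assumption, not read from this repository:

```yaml
spring:
  jpa:
    hibernate:
      # 'update' lets Hibernate add the new target_roi column on startup;
      # 'validate' would instead fail until the ALTER is run manually.
      ddl-auto: update
```

Relying on `ddl-auto: update` in production is generally discouraged in favor of explicit migrations.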
## 📌 Next Steps
### Priority HIGH
1. **Frontend API integration testing**
2. **Implement the participantsDelta calculation logic**
3. **Populate targetRoi data** (Event Service integration)
### Priority MEDIUM
4. Implement time-of-day analysis
5. Implement participant profiles
6. Implement ROI breakdown

View File

@@ -63,7 +63,7 @@ public class AnalyticsBatchScheduler {
                 event.getEventId(), event.getEventTitle());
             // Call with refresh=true to refresh the cache (triggers external API calls)
-            analyticsService.getDashboardData(event.getEventId(), true);
+            analyticsService.getDashboardData(event.getEventId(), null, null, true);
             successCount++;
             log.info("✅ 배치 갱신 완료: eventId={}", event.getEventId());
@@ -99,7 +99,7 @@ public class AnalyticsBatchScheduler {
         for (EventStats event : allEvents) {
             try {
-                analyticsService.getDashboardData(event.getEventId(), true);
+                analyticsService.getDashboardData(event.getEventId(), null, null, true);
                 log.debug("초기 데이터 로딩 완료: eventId={}", event.getEventId());
             } catch (Exception e) {
                 log.warn("초기 데이터 로딩 실패: eventId={}, error={}",

View File

@@ -17,13 +17,13 @@ import java.util.Map;
  * Kafka Consumer configuration
  */
 @Configuration
-@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = false)
+@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = true)
 public class KafkaConsumerConfig {
     @Value("${spring.kafka.bootstrap-servers}")
     private String bootstrapServers;
-    @Value("${spring.kafka.consumer.group-id:analytics-service-consumers-v3}")
+    @Value("${spring.kafka.consumer.group-id:analytics-service}")
     private String groupId;
     @Bean
Some files were not shown because too many files have changed in this diff.