Compare commits


1 commit

Author: hyeda2020
SHA1: be4fcc0dc3
Message: Merge pull request #15 from ktds-dg0501/develop (Develop)
Date: 2025-10-28 13:16:23 +09:00
419 changed files with 3520 additions and 37842 deletions

@@ -1,13 +1,10 @@
---
command: "/deploy-actions-cicd-guide-back"
description: "Write the backend GitHub Actions CI/CD pipeline guide"
---
@cicd
Following the '백엔드GitHubActions파이프라인작성가이드' (backend GitHub Actions pipeline authoring guide), please write a CI/CD guide that uses GitHub Actions.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-actions-cicd-guide-front"
description: "Write the frontend GitHub Actions CI/CD pipeline guide"
---
@cicd
Following the '프론트엔드GitHubActions파이프라인작성가이드' (frontend GitHub Actions pipeline authoring guide), please write a CI/CD guide that uses GitHub Actions.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,6 +1,5 @@
---
command: "/deploy-build-image-back"
description: "Build the backend container images"
---
@cicd

@@ -1,6 +1,5 @@
---
command: "/deploy-build-image-front"
description: "Build the frontend container image"
---
@cicd

@@ -1,64 +1,81 @@
---
command: "/deploy-help"
description: "Guide to the deployment workflow and commands"
---
# Deployment Workflow
## Build Container Images
## Step 1: Build Container Images
### Backend
```
/deploy-build-image-back
- Builds the container images for the backend services
```
- Builds the container images by following the 백엔드컨테이너이미지작성가이드 (backend container image guide)
### Frontend
```
/deploy-build-image-front
- Builds the container image for the frontend service
```
- Builds the container image by following the 프론트엔드컨테이너이미지작성가이드 (frontend container image guide)
## Write Container Run Guide
## Step 2: Write Container Run Guide
### Backend
```
/deploy-run-container-guide-back
- Writes the backend container run guide
- Requires the ACR name and VM access info in the [실행정보] section
```
- Writes the container run instructions by following the 백엔드컨테이너실행방법가이드 (backend container run guide)
- Requires execution info (ACR name, VM info)
### Frontend
```
/deploy-run-container-guide-front
- Writes the frontend container run guide
- Requires the system name, ACR name, and VM access info in the [실행정보] section
```
- Writes the container run instructions by following the 프론트엔드컨테이너실행방법가이드 (frontend container run guide)
- Requires execution info (system name, ACR name, VM info)
## Write Kubernetes Deployment Guide
## Step 3: Write Kubernetes Deployment Guide
### Backend
```
/deploy-k8s-guide-back
- Writes the Kubernetes deployment guide for the backend services
- Requires the ACR name, k8s name, namespace, and resource info in the [실행정보] section
```
- Writes the Kubernetes deployment instructions by following the 백엔드배포가이드 (backend deployment guide)
- Requires execution info (ACR name, k8s name, namespace, resource settings)
### Frontend
```
/deploy-k8s-guide-front
- Writes the Kubernetes deployment guide for the frontend service
- Requires the system name, ACR name, k8s name, namespace, and Gateway Host info in the [실행정보] section
```
- Writes the Kubernetes deployment instructions by following the 프론트엔드배포가이드 (frontend deployment guide)
- Requires execution info (system name, ACR name, k8s name, namespace, Gateway Host, resource settings)
## Write CI/CD Pipeline
### Jenkins CI/CD
## Step 4: Set Up CI/CD Pipeline
### With Jenkins
#### Backend
```
/deploy-jenkins-cicd-guide-back
- Writes the backend CI/CD pipeline guide for Jenkins
- Requires ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE in the [실행정보] section
```
- Sets up the Jenkins CI/CD pipeline by following the 백엔드Jenkins파이프라인작성가이드 (backend Jenkins pipeline guide)
#### Frontend
```
/deploy-jenkins-cicd-guide-front
- Writes the frontend CI/CD pipeline guide for Jenkins
- Requires SYSTEM_NAME, ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE in the [실행정보] section
```
- Sets up the Jenkins CI/CD pipeline by following the 프론트엔드Jenkins파이프라인작성가이드 (frontend Jenkins pipeline guide)
### GitHub Actions CI/CD
### With GitHub Actions
#### Backend
```
/deploy-actions-cicd-guide-back
- Writes the backend CI/CD pipeline guide for GitHub Actions
- Requires ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE in the [실행정보] section
```
- Sets up the GitHub Actions CI/CD pipeline by following the 백엔드GitHubActions파이프라인작성가이드 (backend GitHub Actions pipeline guide)
#### Frontend
```
/deploy-actions-cicd-guide-front
- Writes the frontend CI/CD pipeline guide for GitHub Actions
- Requires SYSTEM_NAME, ACR_NAME, RESOURCE_GROUP, AKS_CLUSTER, NAMESPACE in the [실행정보] section
```
- Sets up the GitHub Actions CI/CD pipeline by following the 프론트엔드GitHubActions파이프라인작성가이드 (frontend GitHub Actions pipeline guide)
---
**Note**: When running each command, provide the required information in a [실행정보] section.
## Notes
- Include the required execution info in the prompt before running each command
- If the execution info is missing, a guidance message is shown and the task is aborted
- Choose either Jenkins or GitHub Actions as the CI/CD tool
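The commands above all expect a [실행정보] (execution info) section in the prompt, but the diff does not show a concrete example. As a hypothetical sketch, the section might look like the following — the field names come from the config files later in this PR, and the values shown are the ones those files use:

```
[실행정보]
ACR_NAME: acrdigitalgarage01
RESOURCE_GROUP: rg-digitalgarage-01
AKS_CLUSTER: aks-digitalgarage-01
NAMESPACE: kt-event-marketing
```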

@@ -1,13 +1,10 @@
---
command: "/deploy-jenkins-cicd-guide-back"
description: "Write the backend Jenkins CI/CD pipeline guide"
---
@cicd
Following the '백엔드Jenkins파이프라인작성가이드' (backend Jenkins pipeline authoring guide), please write a CI/CD guide that uses Jenkins.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-jenkins-cicd-guide-front"
description: "Write the frontend Jenkins CI/CD pipeline guide"
---
@cicd
Following the '프론트엔드Jenkins파이프라인작성가이드' (frontend Jenkins pipeline authoring guide), please write a CI/CD guide that uses Jenkins.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-k8s-guide-back"
description: "Write the backend Kubernetes deployment guide"
---
@cicd
Following the '백엔드배포가이드' (backend deployment guide), please describe how to deploy the backend services.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-k8s-guide-front"
description: "Write the frontend Kubernetes deployment guide"
---
@cicd
Following the '프론트엔드배포가이드' (frontend deployment guide), please describe how to deploy the frontend service.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-run-container-guide-back"
description: "Write the backend container run guide"
---
@cicd
Following the '백엔드컨테이너실행방법가이드' (backend container run guide), please write a container run guide.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

@@ -1,13 +1,10 @@
---
command: "/deploy-run-container-guide-front"
description: "Write the frontend container run guide"
---
@cicd
Following the '프론트엔드컨테이너실행방법가이드' (frontend container run guide), please write a container run guide.
If the prompt contains no '[실행정보]' (execution info) section, stop and display the guidance message.
{안내메시지}
Under the '[실행정보]' section, provide the required information as in the example below.
[실행정보]

.github/README.md vendored

@@ -1,186 +0,0 @@
# KT Event Marketing - CI/CD Infrastructure
This directory contains the CI/CD infrastructure for the KT Event Marketing backend services.
## Directory Structure
```
.github/
├── README.md                      # This file
├── workflows/
│   └── backend-cicd.yaml          # GitHub Actions workflow
├── kustomize/                     # Kubernetes manifest management
│   ├── base/                      # Base resource definitions
│   │   ├── kustomization.yaml
│   │   ├── cm-common.yaml
│   │   ├── secret-common.yaml
│   │   ├── secret-imagepull.yaml
│   │   ├── ingress.yaml
│   │   └── {service}-*.yaml       # Per-service resources
│   └── overlays/                  # Per-environment settings
│       ├── dev/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml       # 1 replica, 256Mi-1024Mi
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── *-patch.yaml       # 2 replicas, 512Mi-2048Mi
│       └── prod/
│           ├── kustomization.yaml
│           └── *-patch.yaml       # 3 replicas, 1024Mi-4096Mi
├── config/
│   ├── deploy_env_vars_dev        # Dev environment variables
│   ├── deploy_env_vars_staging    # Staging environment variables
│   └── deploy_env_vars_prod       # Prod environment variables
└── scripts/
    ├── deploy.sh                  # Manual deployment script
    ├── generate-patches.sh        # Patch generation script
    └── copy-manifests-to-base.py  # Manifest copy script
```
## Key Files
### workflows/backend-cicd.yaml
Defines the GitHub Actions workflow.
**Triggers**:
- Push to the develop branch → deploy to the dev environment
- Push to the main branch → deploy to the prod environment
- Manual workflow dispatch → choose the environment and services
**Jobs**:
1. `detect-changes`: detect which services changed
2. `build-and-push`: build the services and push to ACR
3. `deploy`: deploy to AKS
4. `notify`: report the deployment result
### kustomize/base/kustomization.yaml
Defines the base resources shared by all environments.
**Included resources**:
- Common ConfigMaps and Secrets
- Ingress
- Deployment, Service, ConfigMap, and Secret for each of the 7 services
### kustomize/overlays/{env}/kustomization.yaml
Overrides settings per environment.
**Key differences**:
- Image tag (dev/staging/prod)
- Replica count (1/2/3)
- Resource allocation (small/medium/large)
### scripts/deploy.sh
Script for manual deployment from a local machine.
**Usage**:
```bash
# Deploy all services to the dev environment
./scripts/deploy.sh dev
# Deploy only a specific service to the prod environment
./scripts/deploy.sh prod user-service
```
## Deployment Process
### Automatic Deployment (GitHub Actions)
1. **Dev environment**:
```bash
git checkout develop
git push origin develop
```
2. **Prod environment**:
```bash
git checkout main
git merge develop
git push origin main
```
3. **Manual deployment**:
- GitHub Actions UI → Run workflow
- Select the environment (dev/staging/prod)
- Select the services (all, or a specific service)
### Manual Deployment (Local)
```bash
# Prerequisites: Azure CLI, kubectl, and kustomize installed
# Azure login required
# Deploy all services to the dev environment
./.github/scripts/deploy.sh dev
# Deploy only user-service to the prod environment
./.github/scripts/deploy.sh prod user-service
```
## Per-Environment Settings
| Environment | Branch | Image Tag | Replicas | CPU Request | Memory Request |
|------|--------|-------------|----------|-------------|----------------|
| Dev | develop | dev | 1 | 256m | 256Mi |
| Staging | manual | staging | 2 | 512m | 512Mi |
| Prod | main | prod | 3 | 1024m | 1024Mi |
## Services
1. **user-service** (8081) - user management
2. **event-service** (8082) - event management
3. **ai-service** (8083) - AI-based content generation
4. **content-service** (8084) - content management
5. **distribution-service** (8085) - prize distribution
6. **participation-service** (8086) - event participation
7. **analytics-service** (8087) - analytics and statistics
## Monitoring
### Check Pod Status
```bash
kubectl get pods -n kt-event-marketing
```
### Check Logs
```bash
# Live logs
kubectl logs -n kt-event-marketing -l app=user-service -f
# Logs from the previous container
kubectl logs -n kt-event-marketing <pod-name> --previous
```
### Resource Usage
```bash
# Pod resources
kubectl top pods -n kt-event-marketing
# Node resources
kubectl top nodes
```
## Troubleshooting
See [deployment/cicd/CICD-GUIDE.md](../../deployment/cicd/CICD-GUIDE.md) for the detailed troubleshooting guide.
**Common issues**:
- ImagePullBackOff → check the ACR secret
- CrashLoopBackOff → check the logs and verify the environment variables
- Readiness probe failed → check the context path and actuator path
## Rollback
```bash
# Roll back to the previous revision
kubectl rollout undo deployment/user-service -n kt-event-marketing
# Roll back to a specific revision
kubectl rollout undo deployment/user-service --to-revision=2 -n kt-event-marketing
```
## References
- [CI/CD Guide (Korean)](../../deployment/cicd/CICD-GUIDE.md)
- [GitHub Actions official docs](https://docs.github.com/en/actions)
- [Kustomize official docs](https://kustomize.io/)
- [Azure AKS official docs](https://docs.microsoft.com/en-us/azure/aks/)

@@ -1,11 +0,0 @@
# Development Environment Variables
ENVIRONMENT=dev
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=1
CPU_REQUEST=256m
MEMORY_REQUEST=256Mi
CPU_LIMIT=1024m
MEMORY_LIMIT=1024Mi

@@ -1,11 +0,0 @@
# Production Environment Variables
ENVIRONMENT=prod
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=3
CPU_REQUEST=1024m
MEMORY_REQUEST=1024Mi
CPU_LIMIT=4096m
MEMORY_LIMIT=4096Mi

@@ -1,11 +0,0 @@
# Staging Environment Variables
ENVIRONMENT=staging
ACR_NAME=acrdigitalgarage01
RESOURCE_GROUP=rg-digitalgarage-01
AKS_CLUSTER=aks-digitalgarage-01
NAMESPACE=kt-event-marketing
REPLICAS=2
CPU_REQUEST=512m
MEMORY_REQUEST=512Mi
CPU_LIMIT=2048m
MEMORY_LIMIT=2048Mi

@@ -1,55 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-ai-service
data:
  # Server Configuration
  SERVER_PORT: "8083"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "3"
  REDIS_TIMEOUT: "3000"
  REDIS_POOL_MIN: "2"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "ai-service-consumers"
  # Kafka Topics Configuration
  KAFKA_TOPICS_AI_JOB: "ai-event-generation-job"
  KAFKA_TOPICS_AI_JOB_DLQ: "ai-event-generation-job-dlq"
  # AI Provider Configuration
  AI_PROVIDER: "CLAUDE"
  AI_CLAUDE_API_URL: "https://api.anthropic.com/v1/messages"
  AI_CLAUDE_ANTHROPIC_VERSION: "2023-06-01"
  AI_CLAUDE_MODEL: "claude-sonnet-4-5-20250929"
  AI_CLAUDE_MAX_TOKENS: "4096"
  AI_CLAUDE_TEMPERATURE: "0.7"
  AI_CLAUDE_TIMEOUT: "300000"
  # Circuit Breaker Configuration
  RESILIENCE4J_CIRCUITBREAKER_FAILURE_RATE_THRESHOLD: "50"
  RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_RATE_THRESHOLD: "50"
  RESILIENCE4J_CIRCUITBREAKER_SLOW_CALL_DURATION_THRESHOLD: "60s"
  RESILIENCE4J_CIRCUITBREAKER_PERMITTED_CALLS_HALF_OPEN: "3"
  RESILIENCE4J_CIRCUITBREAKER_SLIDING_WINDOW_SIZE: "10"
  RESILIENCE4J_CIRCUITBREAKER_MINIMUM_CALLS: "5"
  RESILIENCE4J_CIRCUITBREAKER_WAIT_DURATION_OPEN: "60s"
  RESILIENCE4J_TIMELIMITER_TIMEOUT_DURATION: "300s"
  # Redis Cache TTL Configuration (seconds)
  CACHE_TTL_RECOMMENDATION: "86400"
  CACHE_TTL_JOB_STATUS: "86400"
  CACHE_TTL_TREND: "3600"
  CACHE_TTL_FALLBACK: "604800"
  # Logging Configuration
  LOG_LEVEL_ROOT: "INFO"
  LOG_LEVEL_AI: "DEBUG"
  LOG_LEVEL_KAFKA: "INFO"
  LOG_LEVEL_REDIS: "INFO"
  LOG_LEVEL_RESILIENCE4J: "DEBUG"
  LOG_FILE_NAME: "logs/ai-service.log"
  LOG_FILE_MAX_SIZE: "10MB"
  LOG_FILE_MAX_HISTORY: "7"
  LOG_FILE_TOTAL_CAP: "100MB"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-service
  labels:
    app: ai-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-service
  template:
    metadata:
      labels:
        app: ai-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: ai-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8083
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-ai-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-ai-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health
              port: 8083
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8083
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8083
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-ai-service
type: Opaque
stringData:
  # Claude API Key
  AI_CLAUDE_API_KEY: "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"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: ai-service
  labels:
    app: ai-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8083
      protocol: TCP
      name: http
  selector:
    app: ai-service

@@ -1,37 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-analytics-service
data:
  # Server Configuration
  SERVER_PORT: "8086"
  # Database Configuration
  DB_HOST: "analytic-postgresql"
  DB_PORT: "5432"
  DB_NAME: "analytics_db"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "5"
  # Kafka Configuration (service-specific)
  KAFKA_ENABLED: "true"
  KAFKA_CONSUMER_GROUP_ID: "analytics-service"
  # Sample Data Configuration (MVP only)
  SAMPLE_DATA_ENABLED: "true"
  # Batch Scheduler Configuration
  BATCH_REFRESH_INTERVAL: "300000" # 5 minutes (in ms)
  BATCH_INITIAL_DELAY: "30000" # 30 seconds (in ms)
  BATCH_ENABLED: "true"
  # Logging Configuration
  LOG_LEVEL_APP: "INFO"
  LOG_LEVEL_WEB: "INFO"
  LOG_LEVEL_SQL: "WARN"
  LOG_LEVEL_SQL_TYPE: "WARN"
  SHOW_SQL: "false"
  DDL_AUTO: "update"
  LOG_FILE: "logs/analytics-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-service
  labels:
    app: analytics-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: analytics-service
  template:
    metadata:
      labels:
        app: analytics-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: analytics-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8086
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-analytics-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-analytics-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8086
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8086
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8086
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-analytics-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: analytics-service
  labels:
    app: analytics-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8086
      protocol: TCP
      name: http
  selector:
    app: analytics-service

@@ -1,46 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-common
data:
  # Redis Configuration
  REDIS_ENABLED: "true"
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  REDIS_TIMEOUT: "2000ms"
  REDIS_POOL_MAX: "8"
  REDIS_POOL_IDLE: "8"
  REDIS_POOL_MIN: "0"
  REDIS_POOL_WAIT: "-1ms"
  # Kafka Configuration
  KAFKA_BOOTSTRAP_SERVERS: "20.249.182.13:9095,4.217.131.59:9095"
  EXCLUDE_KAFKA: ""
  EXCLUDE_REDIS: ""
  # CORS Configuration
  CORS_ALLOWED_ORIGINS: "http://localhost:8081,http://localhost:8082,http://localhost:8083,http://localhost:8084,http://kt-event-marketing.20.214.196.128.nip.io"
  CORS_ALLOWED_METHODS: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  CORS_ALLOWED_HEADERS: "*"
  CORS_ALLOW_CREDENTIALS: "true"
  CORS_MAX_AGE: "3600"
  # JWT Configuration
  JWT_ACCESS_TOKEN_VALIDITY: "604800000"
  JWT_REFRESH_TOKEN_VALIDITY: "86400000"
  # JPA Configuration
  DDL_AUTO: "update"
  SHOW_SQL: "false"
  JPA_DIALECT: "org.hibernate.dialect.PostgreSQLDialect"
  H2_CONSOLE_ENABLED: "false"
  # Logging Configuration
  LOG_LEVEL_APP: "INFO"
  LOG_LEVEL_WEB: "INFO"
  LOG_LEVEL_SQL: "WARN"
  LOG_LEVEL_SQL_TYPE: "WARN"
  LOG_LEVEL_ROOT: "INFO"
  LOG_FILE_MAX_SIZE: "10MB"
  LOG_FILE_MAX_HISTORY: "7"
  LOG_FILE_TOTAL_CAP: "100MB"

@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-content-service
data:
  # Server Configuration
  SERVER_PORT: "8084"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "1"
  # Replicate API Configuration (Stable Diffusion)
  REPLICATE_API_URL: "https://api.replicate.com"
  REPLICATE_MODEL_VERSION: "stability-ai/sdxl:39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b"
  # HuggingFace API Configuration
  HUGGINGFACE_API_URL: "https://api-inference.huggingface.co"
  HUGGINGFACE_MODEL: "runwayml/stable-diffusion-v1-5"
  # Azure Blob Storage Configuration
  AZURE_CONTAINER_NAME: "content-images"
  # Logging Configuration
  LOG_FILE_PATH: "logs/content-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: content-service
  labels:
    app: content-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: content-service
  template:
    metadata:
      labels:
        app: content-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: content-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-content-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-content-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /api/v1/content/actuator/health
              port: 8084
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /api/v1/content/actuator/health/readiness
              port: 8084
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/v1/content/actuator/health/liveness
              port: 8084
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,14 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-content-service
type: Opaque
stringData:
  # Azure Blob Storage Connection String
  AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=https;AccountName=blobkteventstorage;AccountKey=tcBN7mAfojbl0uGsOpU7RNuKNhHnzmwDiWjN31liSMVSrWaEK+HHnYKZrjBXXAC6ZPsuxUDlsf8x+AStd++QYg==;EndpointSuffix=core.windows.net"
  # Replicate API Token
  REPLICATE_API_TOKEN: ""
  # HuggingFace API Token
  HUGGINGFACE_API_TOKEN: ""

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: content-service
  labels:
    app: content-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8084
      protocol: TCP
      name: http
  selector:
    app: content-service

@@ -1,28 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-distribution-service
data:
  # Server Configuration
  SERVER_PORT: "8085"
  # Database Configuration
  DB_HOST: "distribution-postgresql"
  DB_PORT: "5432"
  DB_NAME: "distributiondb"
  DB_USERNAME: "eventuser"
  # Kafka Configuration
  KAFKA_ENABLED: "true"
  KAFKA_CONSUMER_GROUP: "distribution-service"
  # External Channel APIs
  URIDONGNETV_API_URL: "http://localhost:9001/api/uridongnetv"
  RINGOBIZ_API_URL: "http://localhost:9002/api/ringobiz"
  GINITV_API_URL: "http://localhost:9003/api/ginitv"
  INSTAGRAM_API_URL: "http://localhost:9004/api/instagram"
  NAVER_API_URL: "http://localhost:9005/api/naver"
  KAKAO_API_URL: "http://localhost:9006/api/kakao"
  # Logging Configuration
  LOG_FILE: "logs/distribution-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: distribution-service
  labels:
    app: distribution-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distribution-service
  template:
    metadata:
      labels:
        app: distribution-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: distribution-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8085
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-distribution-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-distribution-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health
              port: 8085
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health/readiness
              port: 8085
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/v1/distribution/actuator/health/liveness
              port: 8085
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-distribution-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: distribution-service
  labels:
    app: distribution-service
spec:
  type: ClusterIP
  selector:
    app: distribution-service
  ports:
    - name: http
      port: 80
      targetPort: 8085
      protocol: TCP

@@ -1,28 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-event-service
data:
  # Server Configuration
  SERVER_PORT: "8080"
  # Database Configuration
  DB_HOST: "event-postgresql"
  DB_PORT: "5432"
  DB_NAME: "eventdb"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "2"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "event-service-consumers"
  # Service URLs
  CONTENT_SERVICE_URL: "http://content-service"
  DISTRIBUTION_SERVICE_URL: "http://distribution-service"
  # Logging Configuration
  LOG_LEVEL: "INFO"
  SQL_LOG_LEVEL: "WARN"
  LOG_FILE: "logs/event-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service
  labels:
    app: event-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service
  template:
    metadata:
      labels:
        app: event-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: event-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-event-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-event-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-event-service
type: Opaque
stringData:
  # Database Password
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: event-service
  labels:
    app: event-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: event-service

@@ -1,116 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kt-event-marketing
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: kt-event-marketing-api.20.214.196.128.nip.io
      http:
        paths:
          # User Service
          - path: /api/v1/users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          # Content Service
          - path: /api/v1/content
            pathType: Prefix
            backend:
              service:
                name: content-service
                port:
                  number: 80
          # Event Service
          - path: /api/v1/events
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          - path: /api/v1/jobs
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          - path: /api/v1/redis-test
            pathType: Prefix
            backend:
              service:
                name: event-service
                port:
                  number: 80
          # AI Service
          - path: /api/v1/ai-service
            pathType: Prefix
            backend:
              service:
                name: ai-service
                port:
                  number: 80
          # Participation Service
          - path: /api/v1/participations
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          - path: /api/v1/winners
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          - path: /debug
            pathType: Prefix
            backend:
              service:
                name: participation-service
                port:
                  number: 80
          # Analytics Service - Event Analytics
          - path: /api/v1/events/([0-9]+)/analytics
            pathType: ImplementationSpecific
            backend:
              service:
                name: analytics-service
                port:
                  number: 80
          # Analytics Service - User Analytics
          - path: /api/v1/users/([0-9]+)/analytics
            pathType: ImplementationSpecific
            backend:
              service:
                name: analytics-service
                port:
                  number: 80
          # Distribution Service
          - path: /distribution
            pathType: Prefix
            backend:
              service:
                name: distribution-service
                port:
                  number: 80

@@ -1,76 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Common resources
resources:
  # Common ConfigMaps and Secrets
  - cm-common.yaml
  - secret-common.yaml
  - secret-imagepull.yaml
  # Ingress
  - ingress.yaml
  # user-service
  - user-service-deployment.yaml
  - user-service-service.yaml
  - user-service-cm-user-service.yaml
  - user-service-secret-user-service.yaml
  # event-service
  - event-service-deployment.yaml
  - event-service-service.yaml
  - event-service-cm-event-service.yaml
  - event-service-secret-event-service.yaml
  # ai-service
  - ai-service-deployment.yaml
  - ai-service-service.yaml
  - ai-service-cm-ai-service.yaml
  - ai-service-secret-ai-service.yaml
  # content-service
  - content-service-deployment.yaml
  - content-service-service.yaml
  - content-service-cm-content-service.yaml
  - content-service-secret-content-service.yaml
  # distribution-service
  - distribution-service-deployment.yaml
  - distribution-service-service.yaml
  - distribution-service-cm-distribution-service.yaml
  - distribution-service-secret-distribution-service.yaml
  # participation-service
  - participation-service-deployment.yaml
  - participation-service-service.yaml
  - participation-service-cm-participation-service.yaml
  - participation-service-secret-participation-service.yaml
  # analytics-service
  - analytics-service-deployment.yaml
  - analytics-service-service.yaml
  - analytics-service-cm-analytics-service.yaml
  - analytics-service-secret-analytics-service.yaml
# Common labels for all resources
commonLabels:
  app.kubernetes.io/managed-by: kustomize
  app.kubernetes.io/part-of: kt-event-marketing
# Image tag replacement (will be overridden by overlays)
images:
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
    newTag: latest
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
    newTag: latest
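The per-environment overlay files deleted in this PR are not shown in the diff. Given the base kustomization above and the patch files further down, a dev overlay would typically look like the following sketch (the exact file name, namespace field, and patch list are assumptions, not taken from this PR):

```yaml
# kustomize/overlays/dev/kustomization.yaml (hypothetical reconstruction)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
resources:
  - ../../base          # pull in all base resources
patches:
  - path: ai-service-patch.yaml   # dev sizing: 1 replica, 256Mi-1024Mi
images:
  - name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
    newTag: dev         # overrides the base "latest" tag
```

Running `kustomize build kustomize/overlays/dev` would then render the base manifests with the dev replica counts, resources, and image tags applied.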

@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-participation-service
data:
  # Server Configuration
  SERVER_PORT: "8084"
  # Database Configuration
  DB_HOST: "participation-postgresql"
  DB_PORT: "5432"
  DB_NAME: "participationdb"
  DB_USERNAME: "eventuser"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "4"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "participation-service-consumers"
  # Logging Configuration
  LOG_LEVEL: "INFO"
  SHOW_SQL: "false"
  LOG_FILE: "logs/participation-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: participation-service
  labels:
    app: participation-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: participation-service
  template:
    metadata:
      labels:
        app: participation-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: participation-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-participation-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-participation-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8084
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8084
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8084
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-participation-service
type: Opaque
stringData:
  DB_PASSWORD: "Hi5Jessica!"

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: participation-service
  labels:
    app: participation-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8084
      protocol: TCP
      name: http
  selector:
    app: participation-service

@@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-common
type: Opaque
stringData:
  # Redis Password
  REDIS_PASSWORD: "Hi5Jessica!"
  # JWT Secret
  JWT_SECRET: "QL0czzXckz18kHnxpaTDoWFkq+3qKO7VQXeNvf2bOoU="

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: kt-event-marketing
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "acrdigitalgarage01.azurecr.io": {
          "username": "acrdigitalgarage01",
          "password": "+OY+rmOagorjWvQe/tTk6oqvnZI8SmNbY/Y2o5EDcY+ACRDCDbYk",
          "auth": "YWNyZGlnaXRhbGdhcmFnZTAxOitPWStybU9hZ29yald2UWUvdFRrNm9xdm5aSThTbU5iWS9ZMm81RURjWStBQ1JEQ0RiWWs="
        }
      }
    }
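The `auth` field in a dockerconfigjson secret like the one above is simply the Base64 encoding of `<username>:<password>`. A quick way to verify or regenerate it, shown here with generic placeholder credentials rather than the real ones:

```shell
# Compute the dockerconfigjson "auth" value for username "user" and password "pass".
# printf (not echo) avoids a trailing newline, which would corrupt the encoding.
printf '%s' 'user:pass' | base64
# → dXNlcjpwYXNz
```

Substituting the registry credentials from the secret should reproduce its `auth` string exactly.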

@@ -1,31 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-user-service
data:
  # Server Configuration
  SERVER_PORT: "8081"
  # Database Configuration
  DB_URL: "jdbc:postgresql://user-postgresql:5432/userdb"
  DB_HOST: "user-postgresql"
  DB_PORT: "5432"
  DB_NAME: "userdb"
  DB_USERNAME: "eventuser"
  DB_DRIVER: "org.postgresql.Driver"
  DB_KIND: "postgresql"
  DB_POOL_MAX: "20"
  DB_POOL_MIN: "5"
  DB_CONN_TIMEOUT: "30000"
  DB_IDLE_TIMEOUT: "600000"
  DB_MAX_LIFETIME: "1800000"
  DB_LEAK_THRESHOLD: "60000"
  # Redis Configuration (service-specific)
  REDIS_DATABASE: "0"
  # Kafka Configuration (service-specific)
  KAFKA_CONSUMER_GROUP: "user-service-consumers"
  # Logging Configuration
  LOG_FILE_PATH: "logs/user-service.log"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      imagePullSecrets:
        - name: kt-event-marketing
      containers:
        - name: user-service
          image: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
              name: http
          envFrom:
            - configMapRef:
                name: cm-common
            - configMapRef:
                name: cm-user-service
            - secretRef:
                name: secret-common
            - secretRef:
                name: secret-user-service
          resources:
            requests:
              cpu: "256m"
              memory: "256Mi"
            limits:
              cpu: "1024m"
              memory: "1024Mi"
          startupProbe:
            httpGet:
              path: /actuator/health
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3

View File

@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: secret-user-service
type: Opaque
stringData:
# Database Password
DB_PASSWORD: "Hi5Jessica!"

View File

@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: user-service
labels:
app: user-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8081
protocol: TCP
name: http
selector:
app: user-service

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 1
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 1
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 1
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 1
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 1
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,38 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: dev
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for dev environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: dev
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: dev

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 1
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 1
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "256m"
memory: "256Mi"
limits:
cpu: "1024m"
memory: "1024Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 3
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 3
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 3
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 3
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 3
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,38 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: prod
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for prod environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: prod
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: prod

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 3
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 3
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ai-service
spec:
replicas: 2
template:
spec:
containers:
- name: ai-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-service
spec:
replicas: 2
template:
spec:
containers:
- name: analytics-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: content-service
spec:
replicas: 2
template:
spec:
containers:
- name: content-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: distribution-service
spec:
replicas: 2
template:
spec:
containers:
- name: distribution-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service
spec:
replicas: 2
template:
spec:
containers:
- name: event-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,38 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kt-event-marketing
bases:
- ../../base
# Environment-specific labels
commonLabels:
environment: staging
# Environment-specific patches
patchesStrategicMerge:
- user-service-patch.yaml
- event-service-patch.yaml
- ai-service-patch.yaml
- content-service-patch.yaml
- distribution-service-patch.yaml
- participation-service-patch.yaml
- analytics-service-patch.yaml
# Override image tags for staging environment
images:
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service
newTag: staging
- name: acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service
newTag: staging

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: participation-service
spec:
replicas: 2
template:
spec:
containers:
- name: participation-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,17 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
spec:
replicas: 2
template:
spec:
containers:
- name: user-service
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"

View File

@ -1,79 +0,0 @@
#!/usr/bin/env python3
"""
Copy K8s manifests to Kustomize base directory and remove namespace declarations
"""
import os
import shutil
import yaml
from pathlib import Path
# Service names
SERVICES = [
'user-service',
'event-service',
'ai-service',
'content-service',
'distribution-service',
'participation-service',
'analytics-service'
]
# Base directories
SOURCE_DIR = Path('deployment/k8s')
BASE_DIR = Path('.github/kustomize/base')
def remove_namespace_from_yaml(content):
"""Remove namespace field from YAML content"""
docs = list(yaml.safe_load_all(content))
for doc in docs:
if doc and isinstance(doc, dict):
if 'metadata' in doc and 'namespace' in doc['metadata']:
del doc['metadata']['namespace']
return yaml.dump_all(docs, default_flow_style=False, sort_keys=False)
def copy_and_process_file(source_path, dest_path):
"""Copy file and remove namespace declaration"""
with open(source_path, 'r', encoding='utf-8') as f:
content = f.read()
# Remove namespace from YAML
processed_content = remove_namespace_from_yaml(content)
# Write to destination
dest_path.parent.mkdir(parents=True, exist_ok=True)
with open(dest_path, 'w', encoding='utf-8') as f:
f.write(processed_content)
print(f"✓ Copied and processed: {source_path} -> {dest_path}")
def main():
print("Starting manifest copy to Kustomize base...")
# Copy common resources
print("\n[Common Resources]")
common_dir = SOURCE_DIR / 'common'
for file in ['cm-common.yaml', 'secret-common.yaml', 'secret-imagepull.yaml', 'ingress.yaml']:
source = common_dir / file
if source.exists():
dest = BASE_DIR / file
copy_and_process_file(source, dest)
# Copy service-specific resources
print("\n[Service Resources]")
for service in SERVICES:
service_dir = SOURCE_DIR / service
if not service_dir.exists():
print(f"⚠ Service directory not found: {service_dir}")
continue
print(f"\nProcessing {service}...")
for file in service_dir.glob('*.yaml'):
dest = BASE_DIR / f"{service}-{file.name}"
copy_and_process_file(file, dest)
print("\n✅ All manifests copied to base directory!")
if __name__ == '__main__':
main()
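
The namespace-stripping helper above can be exercised on its own; this is a minimal sketch (it assumes PyYAML is installed, which is the only non-stdlib dependency the script itself uses):

```python
import yaml

def remove_namespace_from_yaml(content):
    """Mirror of the script's helper: drop metadata.namespace from every doc."""
    docs = list(yaml.safe_load_all(content))
    for doc in docs:
        if doc and isinstance(doc, dict):
            if 'metadata' in doc and 'namespace' in doc['metadata']:
                del doc['metadata']['namespace']
    return yaml.dump_all(docs, default_flow_style=False, sort_keys=False)

manifest = """\
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-user-service
  namespace: kt-event-marketing
data:
  SERVER_PORT: "8081"
"""

cleaned = remove_namespace_from_yaml(manifest)
# The namespace line is gone; everything else survives intact.
assert "namespace" not in cleaned
assert "name: cm-user-service" in cleaned
```

Removing the namespace here is what lets the Kustomize overlays set it per environment via the `namespace:` field in each kustomization.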

View File

@ -1,181 +0,0 @@
#!/bin/bash
set -e
###############################################################################
# Backend Services Deployment Script for AKS
#
# Usage:
# ./deploy.sh <environment> [service-name]
#
# Arguments:
# environment - Target environment (dev, staging, prod)
# service-name - Specific service to deploy (optional, deploys all if not specified)
#
# Examples:
# ./deploy.sh dev # Deploy all services to dev
# ./deploy.sh prod user-service # Deploy only user-service to prod
###############################################################################
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Validate arguments
if [ $# -lt 1 ]; then
log_error "Usage: $0 <environment> [service-name]"
log_error "Environment must be one of: dev, staging, prod"
exit 1
fi
ENVIRONMENT=$1
SERVICE=${2:-all}
# Validate environment
if [[ ! "$ENVIRONMENT" =~ ^(dev|staging|prod)$ ]]; then
log_error "Invalid environment: $ENVIRONMENT"
log_error "Must be one of: dev, staging, prod"
exit 1
fi
# Load environment variables
ENV_FILE=".github/config/deploy_env_vars_${ENVIRONMENT}"
if [ ! -f "$ENV_FILE" ]; then
log_error "Environment file not found: $ENV_FILE"
exit 1
fi
source "$ENV_FILE"
log_info "Loaded environment configuration: $ENVIRONMENT"
# Service list
SERVICES=(
"user-service"
"event-service"
"ai-service"
"content-service"
"distribution-service"
"participation-service"
"analytics-service"
)
# Validate service if specified
if [ "$SERVICE" != "all" ]; then
if [[ ! " ${SERVICES[@]} " =~ " ${SERVICE} " ]]; then
log_error "Invalid service: $SERVICE"
log_error "Must be one of: ${SERVICES[*]}"
exit 1
fi
SERVICES=("$SERVICE")
fi
log_info "Services to deploy: ${SERVICES[*]}"
# Check prerequisites
log_info "Checking prerequisites..."
if ! command -v az &> /dev/null; then
log_error "Azure CLI not found. Please install Azure CLI."
exit 1
fi
if ! command -v kubectl &> /dev/null; then
log_error "kubectl not found. Please install kubectl."
exit 1
fi
if ! command -v kustomize &> /dev/null; then
log_error "kustomize not found. Please install kustomize."
exit 1
fi
# Azure login check
log_info "Checking Azure authentication..."
if ! az account show &> /dev/null; then
log_error "Not logged in to Azure. Please run 'az login'"
exit 1
fi
# Get AKS credentials
log_info "Getting AKS credentials..."
az aks get-credentials \
--resource-group "$RESOURCE_GROUP" \
--name "$AKS_CLUSTER" \
--overwrite-existing
# Check namespace
log_info "Checking namespace: $NAMESPACE"
if ! kubectl get namespace "$NAMESPACE" &> /dev/null; then
log_warn "Namespace $NAMESPACE does not exist. Creating..."
kubectl create namespace "$NAMESPACE"
fi
# Build and deploy with Kustomize
OVERLAY_DIR=".github/kustomize/overlays/${ENVIRONMENT}"
if [ ! -d "$OVERLAY_DIR" ]; then
log_error "Kustomize overlay directory not found: $OVERLAY_DIR"
exit 1
fi
log_info "Building Kustomize manifests for $ENVIRONMENT..."
cd "$OVERLAY_DIR"
# Update image tags
log_info "Updating image tags to: $ENVIRONMENT"
kustomize edit set image \
${ACR_NAME}.azurecr.io/kt-event-marketing/user-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/event-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/ai-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/content-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/distribution-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/participation-service:${ENVIRONMENT} \
${ACR_NAME}.azurecr.io/kt-event-marketing/analytics-service:${ENVIRONMENT}
# Apply manifests
log_info "Applying manifests to AKS..."
kustomize build . | kubectl apply -f -
cd - > /dev/null
# Wait for deployments
log_info "Waiting for deployments to be ready..."
for service in "${SERVICES[@]}"; do
log_info "Waiting for $service deployment..."
if ! kubectl rollout status deployment/"$service" -n "$NAMESPACE" --timeout=5m; then
log_error "Deployment of $service failed!"
exit 1
fi
log_info "$service is ready"
done
# Verify deployment
log_info "Verifying deployment..."
echo ""
echo "=== Pods Status ==="
kubectl get pods -n "$NAMESPACE" -l app.kubernetes.io/part-of=kt-event-marketing
echo ""
echo "=== Services ==="
kubectl get svc -n "$NAMESPACE"
echo ""
echo "=== Ingress ==="
kubectl get ingress -n "$NAMESPACE"
log_info "Deployment completed successfully!"
log_info "Environment: $ENVIRONMENT"
log_info "Services: ${SERVICES[*]}"

View File

@ -1,51 +0,0 @@
#!/bin/bash
SERVICES=(user-service event-service ai-service content-service distribution-service participation-service analytics-service)
# Staging patches (2 replicas, increased resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/staging/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 2
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "2048m"
memory: "2048Mi"
YAML
done
# Prod patches (3 replicas, maximum resources)
for service in "${SERVICES[@]}"; do
cat > ".github/kustomize/overlays/prod/${service}-patch.yaml" << YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${service}
spec:
replicas: 3
template:
spec:
containers:
- name: ${service}
resources:
requests:
cpu: "1024m"
memory: "1024Mi"
limits:
cpu: "4096m"
memory: "4096Mi"
YAML
done
echo "✅ Generated all patch files for staging and prod"

View File

@ -1,207 +0,0 @@
name: Backend CI/CD Pipeline
on:
# push:
# branches:
# - develop
# - main
# paths:
# - '*-service/**'
# - '.github/workflows/backend-cicd.yaml'
# - '.github/kustomize/**'
pull_request:
branches:
- develop
- main
paths:
- '*-service/**'
workflow_dispatch:
inputs:
environment:
description: 'Target environment'
required: true
type: choice
options:
- dev
- staging
- prod
service:
description: 'Service to deploy (all for all services)'
required: true
default: 'all'
env:
ACR_NAME: acrdigitalgarage01
RESOURCE_GROUP: rg-digitalgarage-01
AKS_CLUSTER: aks-digitalgarage-01
NAMESPACE: kt-event-marketing
JDK_VERSION: '21'
jobs:
detect-changes:
name: Detect Changed Services
runs-on: ubuntu-latest
outputs:
services: ${{ steps.detect.outputs.services }}
environment: ${{ steps.env.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Determine environment
id: env
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
echo "environment=${{ github.event.inputs.environment }}" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/main" ]; then
echo "environment=prod" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" = "refs/heads/develop" ]; then
echo "environment=dev" >> $GITHUB_OUTPUT
else
echo "environment=dev" >> $GITHUB_OUTPUT
fi
- name: Detect changed services
id: detect
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" != "all" ]; then
echo "services=[\"${{ github.event.inputs.service }}\"]" >> $GITHUB_OUTPUT
elif [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ "${{ github.event.inputs.service }}" = "all" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
CHANGED_SERVICES=$(git diff --name-only ${{ github.event.pull_request.base.sha || github.event.before }} ${{ github.sha }} | \
grep -E '^(user|event|ai|content|distribution|participation|analytics)-service/' | \
cut -d'/' -f1 | sort -u | \
jq -R -s -c 'split("\n") | map(select(length > 0))')
if [ "$CHANGED_SERVICES" = "[]" ] || [ -z "$CHANGED_SERVICES" ]; then
echo "services=[\"user-service\",\"event-service\",\"ai-service\",\"content-service\",\"distribution-service\",\"participation-service\",\"analytics-service\"]" >> $GITHUB_OUTPUT
else
echo "services=$CHANGED_SERVICES" >> $GITHUB_OUTPUT
fi
fi
build-and-push:
name: Build and Push - ${{ matrix.service }}
needs: detect-changes
runs-on: ubuntu-latest
strategy:
matrix:
service: ${{ fromJson(needs.detect-changes.outputs.services) }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up JDK ${{ env.JDK_VERSION }}
uses: actions/setup-java@v4
with:
java-version: ${{ env.JDK_VERSION }}
distribution: 'temurin'
cache: 'gradle'
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build with Gradle
run: ./gradlew ${{ matrix.service }}:build -x test
# - name: Run tests
# run: ./gradlew ${{ matrix.service }}:test
- name: Build JAR
run: ./gradlew ${{ matrix.service }}:bootJar
- name: Log in to Azure Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.ACR_NAME }}.azurecr.io
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ./${{ matrix.service }}
file: ./${{ matrix.service }}/Dockerfile
push: true
tags: |
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ needs.detect-changes.outputs.environment }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:${{ github.sha }}
${{ env.ACR_NAME }}.azurecr.io/kt-event-marketing/${{ matrix.service }}:latest
deploy:
name: Deploy to AKS - ${{ needs.detect-changes.outputs.environment }}
needs: [detect-changes, build-and-push]
runs-on: ubuntu-latest
environment: ${{ needs.detect-changes.outputs.environment }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Azure login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Get AKS credentials
run: |
az aks get-credentials \
--resource-group ${{ env.RESOURCE_GROUP }} \
--name ${{ env.AKS_CLUSTER }} \
--overwrite-existing
- name: Setup Kustomize
uses: imranismail/setup-kustomize@v2
- name: Deploy with Kustomize
run: |
cd .github/kustomize/overlays/${{ needs.detect-changes.outputs.environment }}
kustomize edit set image \
acrdigitalgarage01.azurecr.io/kt-event-marketing/user-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/event-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/ai-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/content-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/distribution-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/participation-service:${{ needs.detect-changes.outputs.environment }} \
acrdigitalgarage01.azurecr.io/kt-event-marketing/analytics-service:${{ needs.detect-changes.outputs.environment }}
kustomize build . | kubectl apply -f -
- name: Wait for deployment rollout
run: |
for service in $(echo '${{ needs.detect-changes.outputs.services }}' | jq -r '.[]'); do
echo "Waiting for ${service} deployment..."
kubectl rollout status deployment/${service} -n ${{ env.NAMESPACE }} --timeout=5m
done
- name: Verify deployment
run: |
echo "=== Pods Status ==="
kubectl get pods -n ${{ env.NAMESPACE }} -l app.kubernetes.io/part-of=kt-event-marketing
echo "=== Services ==="
kubectl get svc -n ${{ env.NAMESPACE }}
echo "=== Ingress ==="
kubectl get ingress -n ${{ env.NAMESPACE }}
notify:
name: Notify Deployment Result
needs: [detect-changes, deploy]
runs-on: ubuntu-latest
if: always()
steps:
- name: Deployment Success
if: needs.deploy.result == 'success'
run: |
echo "✅ Deployment to ${{ needs.detect-changes.outputs.environment }} succeeded!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
- name: Deployment Failure
if: needs.deploy.result == 'failure'
run: |
echo "❌ Deployment to ${{ needs.detect-changes.outputs.environment }} failed!"
echo "Services: ${{ needs.detect-changes.outputs.services }}"
exit 1
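
The `Detect changed services` step's grep/jq pipeline can be mirrored in plain Python, which makes the path-to-service mapping easier to reason about (a standalone sketch, not part of the workflow):

```python
import json
import re

def detect_services(changed_paths):
    """Map changed file paths to a sorted, de-duplicated JSON list of
    *-service module names, like the workflow's grep | cut | sort -u | jq."""
    pattern = re.compile(
        r'^(user|event|ai|content|distribution|participation|analytics)-service/')
    services = sorted({p.split('/')[0] for p in changed_paths if pattern.match(p)})
    return json.dumps(services)

print(detect_services([
    "user-service/src/Main.java",
    "event-service/build.gradle",
    "user-service/README.md",
    "docs/guide.md",
]))  # → ["event-service", "user-service"]
```

In the workflow itself, an empty result falls back to deploying all seven services.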

.gitignore
View File

@ -61,5 +61,3 @@ k8s/**/*-local.yaml
# Gradle (local environment settings)
gradle.properties
*.hprof
test-data.json

Binary file not shown (image, 94 KiB)

View File

@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="AiServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.ai-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.ai.AiApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.ai.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8081" />
<env name="DB_HOST" value="4.230.112.141" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="aidb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="ai" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="AnalyticsServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.analytics-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.analytics.AnalyticsApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.analytics.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8087" />
<env name="DB_HOST" value="4.230.49.9" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="analyticdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="analytic" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,29 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="ContentServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.content-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.content.ContentApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.content.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8084" />
<env name="DB_HOST" value="4.217.131.139" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="contentdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="DistributionServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.distribution-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.distribution.DistributionApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.distribution.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8085" />
<env name="DB_HOST" value="4.217.133.59" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="distributiondb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="distribution-service" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,31 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="EventServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.event-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.event.EventApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.event.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8082" />
<env name="DB_HOST" value="20.249.177.232" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="eventdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<env name="DISTRIBUTION_SERVICE_URL" value="http://localhost:8085" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,29 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="UserServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.user-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.user.UserApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.user.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="SERVER_PORT" value="8083" />
<env name="DB_HOST" value="20.249.125.115" />
<env name="DB_PORT" value="5432" />
<env name="DB_NAME" value="userdb" />
<env name="DB_USERNAME" value="eventuser" />
<env name="DB_PASSWORD" value="Hi5Jessica!" />
<env name="REDIS_HOST" value="20.214.210.71" />
<env name="REDIS_PORT" value="6379" />
<env name="REDIS_PASSWORD" value="Hi5Jessica!" />
<env name="JPA_DDL_AUTO" value="update" />
<env name="JPA_SHOW_SQL" value="false" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

View File

@ -1,84 +0,0 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="analytics-service" type="GradleRunConfiguration" factoryName="Gradle">
<ExternalSystemSettings>
<option name="env">
<map>
<!-- Database Configuration -->
<entry key="DB_KIND" value="postgresql" />
<entry key="DB_HOST" value="4.230.49.9" />
<entry key="DB_PORT" value="5432" />
<entry key="DB_NAME" value="analyticdb" />
<entry key="DB_USERNAME" value="eventuser" />
<entry key="DB_PASSWORD" value="Hi5Jessica!" />
<!-- JPA Configuration -->
<entry key="DDL_AUTO" value="create" />
<entry key="SHOW_SQL" value="true" />
<!-- Redis Configuration -->
<entry key="REDIS_HOST" value="20.214.210.71" />
<entry key="REDIS_PORT" value="6379" />
<entry key="REDIS_PASSWORD" value="Hi5Jessica!" />
<entry key="REDIS_DATABASE" value="5" />
<!-- Kafka Configuration (remote server) -->
<entry key="KAFKA_ENABLED" value="true" />
<entry key="KAFKA_BOOTSTRAP_SERVERS" value="20.249.182.13:9095,4.217.131.59:9095" />
<entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers-v3" />
<!-- Sample Data Configuration (MVP Only) -->
<!-- ⚠️ Kafka Producer로 이벤트 발행 (Consumer가 처리) -->
<entry key="SAMPLE_DATA_ENABLED" value="true" />
<!-- Server Configuration -->
<entry key="SERVER_PORT" value="8086" />
<!-- JWT Configuration -->
<entry key="JWT_SECRET" value="dev-jwt-secret-key-for-development-only-kt-event-marketing" />
<entry key="JWT_ACCESS_TOKEN_VALIDITY" value="1800" />
<entry key="JWT_REFRESH_TOKEN_VALIDITY" value="86400" />
<!-- CORS Configuration -->
<entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*" />
<!-- Logging Configuration -->
<entry key="LOG_FILE" value="logs/analytics-service.log" />
<entry key="LOG_LEVEL_APP" value="DEBUG" />
<entry key="LOG_LEVEL_WEB" value="INFO" />
<entry key="LOG_LEVEL_SQL" value="DEBUG" />
<entry key="LOG_LEVEL_SQL_TYPE" value="TRACE" />
</map>
</option>
<option name="executionName" />
<option name="externalProjectPath" value="$PROJECT_DIR$" />
<option name="externalSystemIdString" value="GRADLE" />
<option name="scriptParameters" value="" />
<option name="taskDescriptions">
<list />
</option>
<option name="taskNames">
<list>
<option value="analytics-service:bootRun" />
</list>
</option>
<option name="vmOptions" />
</ExternalSystemSettings>
<ExternalSystemDebugServerProcess>true</ExternalSystemDebugServerProcess>
<ExternalSystemReattachDebugProcess>true</ExternalSystemReattachDebugProcess>
<EXTENSION ID="com.intellij.execution.ExternalSystemRunConfigurationJavaExtension">
<extension name="net.ashald.envfile">
<option name="IS_ENABLED" value="false" />
<option name="IS_SUBST" value="false" />
<option name="IS_PATH_MACRO_SUPPORTED" value="false" />
<option name="IS_IGNORE_MISSING_FILES" value="false" />
<option name="IS_ENABLE_EXPERIMENTAL_INTEGRATIONS" value="false" />
<ENTRIES>
<ENTRY IS_ENABLED="true" PARSER="runconfig" IS_EXECUTABLE="false" />
</ENTRIES>
</extension>
</EXTENSION>
<DebugAllEnabled>false</DebugAllEnabled>
<RunAsTest>false</RunAsTest>
<method v="2" />
</configuration>
</component>


@ -1,244 +0,0 @@
# API Integration Test Report
**Test date**: 2025-10-29
**Scope**: API integration between the frontend (localhost:3000) and event-service (localhost:8080)
---
## ✅ Test Result Summary
### 1. Service Status
- **Frontend**: ✅ Next.js server (port 3000) running normally
- **Backend**: ✅ Event-service (port 8080) running normally
- **Database**: ✅ PostgreSQL connection healthy (health check passed)
### 2. API Integration Tests
#### 2.1 Direct Backend API Call
**Test command**:
```bash
curl -X GET "http://localhost:8080/api/v1/events?page=0&size=20" \
  -H "Authorization: Bearer <JWT_TOKEN>" \
  -H "Content-Type: application/json"
```
**Result**: ✅ **Success**
- Response code: 200 OK
- Events returned: 8
- Response format: JSON (standard API response envelope)
- Pagination: working as expected
**Sample response**:
```json
{
"success": true,
"data": {
"content": [
{
"eventId": "2a91c77c-9276-49d3-94d5-0ab8f0b3d343",
"userId": "11111111-1111-1111-1111-111111111111",
"storeId": "22222222-2222-2222-2222-222222222222",
"objective": "awareness",
"status": "DRAFT",
"createdAt": "2025-10-29T11:08:38.556326"
}
// ... 7 more
],
"page": 0,
"size": 20,
"totalElements": 8,
"totalPages": 1
}
}
```
#### 2.2 Authentication Test
**JWT token**: ✅ working
- Token generation script: `generate-test-token.py`
- Validity: 365 days (for testing)
- Algorithm: HS256
- Secret: matches the backend
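The report references `generate-test-token.py` but does not include it. The sketch below shows, as an assumption, how such an HS256 test token could be minted; the claim names and secret here are illustrative placeholders, not the project's real values:

```typescript
// Hypothetical sketch of a test-token generator (mirrors what
// generate-test-token.py presumably does). Placeholder claims/secret only.
import { createHmac } from "node:crypto";

const b64url = (data: Buffer | string): string =>
  Buffer.from(data).toString("base64url");

export function mintTestToken(secret: string, userId: string, days = 365): string {
  const header = { alg: "HS256", typ: "JWT" };
  const now = Math.floor(Date.now() / 1000);
  const payload = { sub: userId, iat: now, exp: now + days * 86400 };
  // JWT = base64url(header) . base64url(payload) . HMAC-SHA256 signature
  const signingInput =
    b64url(JSON.stringify(header)) + "." + b64url(JSON.stringify(payload));
  const signature = createHmac("sha256", secret)
    .update(signingInput)
    .digest("base64url");
  return signingInput + "." + signature;
}
```

The resulting token can be passed as `Authorization: Bearer <token>` in the curl call above, provided the secret matches the backend's `JWT_SECRET`.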
#### 2.3 Frontend Configuration
**Environment file**: `.env.local` created
```env
NEXT_PUBLIC_API_BASE_URL=http://localhost:8081
NEXT_PUBLIC_EVENT_HOST=http://localhost:8080
NEXT_PUBLIC_API_VERSION=v1
```
**Current state**: ⚠️ **still using mock data**
- File: `src/app/(main)/events/page.tsx`
- The event list page renders hard-coded mock data
- Actual API integration code is not yet implemented
---
## 📊 API Endpoint Reference
### Event Service (localhost:8080)
#### 1. List Events
- **URL**: `GET /api/v1/events`
- **Auth**: Bearer token required
- **Parameters**:
- `status`: EventStatus (optional) - DRAFT, PUBLISHED, ENDED
- `search`: String (optional) - search keyword
- `objective`: String (optional) - objective filter
- `page`: int (default: 0)
- `size`: int (default: 20)
- `sort`: String (default: createdAt)
- `order`: String (default: desc)
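The query parameters above can be assembled as follows; this is a minimal sketch, where the parameter names and defaults come from the table above but the helper itself is hypothetical:

```typescript
// Builds the query string for GET /api/v1/events from optional filters.
// Defaults mirror the documented ones: page=0, size=20, sort=createdAt, order=desc.
export interface EventListParams {
  status?: "DRAFT" | "PUBLISHED" | "ENDED";
  search?: string;
  objective?: string;
  page?: number;
  size?: number;
  sort?: string;
  order?: "asc" | "desc";
}

export function buildEventListQuery(params: EventListParams = {}): string {
  const q = new URLSearchParams();
  q.set("page", String(params.page ?? 0));
  q.set("size", String(params.size ?? 20));
  q.set("sort", params.sort ?? "createdAt");
  q.set("order", params.order ?? "desc");
  // Optional filters are only added when present.
  if (params.status) q.set("status", params.status);
  if (params.search) q.set("search", params.search);
  if (params.objective) q.set("objective", params.objective);
  return "/api/v1/events?" + q.toString();
}
```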
#### 2. Get Event Detail
- **URL**: `GET /api/v1/events/{eventId}`
- **Auth**: Bearer token required
#### 3. Create Event (Select Objective)
- **URL**: `POST /api/v1/events/objectives`
- **Auth**: Bearer token required
- **Request Body**:
```json
{
  "objective": "CUSTOMER_ACQUISITION"
}
```
#### 4. Additional Endpoints
- `DELETE /api/v1/events/{eventId}` - delete an event
- `POST /api/v1/events/{eventId}/publish` - publish an event
- `POST /api/v1/events/{eventId}/end` - end an event
- `POST /api/v1/events/{eventId}/ai-recommendations` - request AI recommendations
- `POST /api/v1/events/{eventId}/images` - request image generation
- `PUT /api/v1/events/{eventId}` - update an event
---
## 🔍 Findings
### ✅ Working Items
1. **Backend services**
- Event-service running (port 8080)
- PostgreSQL database connection healthy
- API endpoints responding correctly
- JWT authentication working
2. **Frontend service**
- Next.js dev server running (port 3000)
- Pages render correctly
- Environment variables configured
### ⚠️ Items Needing Work
#### 1. Frontend API integration not implemented
**Current state**:
- `src/app/(main)/events/page.tsx` uses mock data
- No actual API call code
**Recommended fix**:
```typescript
// src/entities/event/api/eventApi.ts (needs to be created)
import { apiClient } from '@/shared/api';
export const eventApi = {
  getEvents: async (params) => {
    const response = await apiClient.get('/api/v1/events', { params });
    return response.data;
  },
  // ... other methods
};
```
#### 2. API Client Configuration
**Current**:
- The default `apiClient` base URL points at user-service (8081)
- The Event API is a separate service (8080)
**Suggested improvement**:
```typescript
// Split clients per service, or
// use the NEXT_PUBLIC_EVENT_HOST environment variable
const eventApiClient = axios.create({
  baseURL: process.env.NEXT_PUBLIC_EVENT_HOST || 'http://localhost:8080',
  // ...
});
```
---
## 📝 Test Checklist
### Completed ✅
- [x] Backend service status verified
- [x] Frontend service status verified
- [x] Event Service API called directly
- [x] JWT token generated and tested
- [x] Environment variables configured (`.env.local`)
- [x] API response format verified
- [x] Pagination behavior verified
- [x] Database connection verified
### Remaining ⏳
- [ ] Write frontend API integration code
- [ ] Implement the Event API client
- [ ] Integrate React Query or SWR
- [ ] Implement error handling
- [ ] Implement loading-state UI
- [ ] Test rendering with real data
- [ ] Write E2E tests
---
## 🎯 Recommended Next Steps
### Step 1: Write the Event API client
```bash
# Files to create
src/entities/event/api/eventApi.ts
src/entities/event/model/types.ts
```
### Step 2: Set up React Query
```bash
# Write a useEvents hook
src/entities/event/model/useEvents.ts
```
### Step 3: Update the page
```bash
# Replace mock data with real API calls
src/app/(main)/events/page.tsx
```
### Step 4: Integration testing
- Verify real data renders in the browser
- Test filtering and search
- Verify pagination
---
## 📌 Reference
### Test Token
- User ID: `6db043d0-b303-4577-b9dd-6d366cc59fa0`
- Store ID: `34000028-01fd-4ed1-975c-35f7c88b6547`
- Email: `test@example.com`
- Valid until: 2026-10-29
### Service Port Map
| Service | Port | Status |
|--------|------|------|
| Frontend | 3000 | ✅ Running |
| User Service | 8081 | ⚠️ Not verified |
| Event Service | 8080 | ✅ Running |
| Content Service | 8082 | ⚠️ Not verified |
| AI Service | 8083 | ⚠️ Not verified |
| Participation Service | 8084 | ⚠️ Not verified |
---
## ✨ Conclusion
**The backend API is working correctly, and the environment for frontend integration is ready.**
The next task is to replace the frontend's mock data with real API calls.


@ -1,24 +0,0 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8083/api/v1/ai-service/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]


@ -1,7 +1,3 @@
bootJar {
archiveFileName = 'ai-service.jar'
}
dependencies {
// Kafka Consumer
implementation 'org.springframework.kafka:spring-kafka'


@ -5,31 +5,31 @@ spring:
# Redis Configuration
data:
redis:
host: ${REDIS_HOST:20.214.210.71}
host: ${REDIS_HOST:redis-external} # Production: redis-external, Local: 20.214.210.71
port: ${REDIS_PORT:6379}
password: ${REDIS_PASSWORD:Hi5Jessica!}
database: ${REDIS_DATABASE:3}
password: ${REDIS_PASSWORD:}
database: ${REDIS_DATABASE:0} # AI Service uses database 3
timeout: ${REDIS_TIMEOUT:3000}
lettuce:
pool:
max-active: ${REDIS_POOL_MAX:8}
max-idle: ${REDIS_POOL_IDLE:8}
min-idle: ${REDIS_POOL_MIN:2}
max-wait: ${REDIS_POOL_WAIT:-1ms}
max-active: 8
max-idle: 8
min-idle: 2
max-wait: -1ms
# Kafka Consumer Configuration
kafka:
bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:4.230.50.63:9092}
bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
consumer:
group-id: ${KAFKA_CONSUMER_GROUP:ai-service-consumers}
group-id: ai-service-consumers
auto-offset-reset: earliest
enable-auto-commit: false
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
properties:
spring.json.trusted.packages: "*"
max.poll.records: 10
session.timeout.ms: 30000
max.poll.records: ${KAFKA_MAX_POLL_RECORDS:10}
session.timeout.ms: ${KAFKA_SESSION_TIMEOUT:30000}
listener:
ack-mode: manual
@ -37,7 +37,7 @@ spring:
server:
port: ${SERVER_PORT:8083}
servlet:
context-path: /api/v1/ai-service
context-path: /
encoding:
charset: UTF-8
enabled: true
@ -45,13 +45,13 @@ server:
# JWT Configuration
jwt:
secret: ${JWT_SECRET:kt-event-marketing-secret-key-for-development-only-please-change-in-production}
access-token-validity: ${JWT_ACCESS_TOKEN_VALIDITY:604800000}
secret: ${JWT_SECRET:}
access-token-validity: ${JWT_ACCESS_TOKEN_VALIDITY:1800}
refresh-token-validity: ${JWT_REFRESH_TOKEN_VALIDITY:86400}
# CORS Configuration
cors:
allowed-origins: ${CORS_ALLOWED_ORIGINS:http://localhost:*}
allowed-origins: ${CORS_ALLOWED_ORIGINS:http://localhost:3000,http://localhost:8080}
allowed-methods: ${CORS_ALLOWED_METHODS:GET,POST,PUT,DELETE,OPTIONS,PATCH}
allowed-headers: ${CORS_ALLOWED_HEADERS:*}
allow-credentials: ${CORS_ALLOW_CREDENTIALS:true}
@ -91,39 +91,45 @@ springdoc:
# Logging Configuration
logging:
level:
root: ${LOG_LEVEL_ROOT:INFO}
com.kt.ai: ${LOG_LEVEL_AI:DEBUG}
org.springframework.kafka: ${LOG_LEVEL_KAFKA:INFO}
org.springframework.data.redis: ${LOG_LEVEL_REDIS:INFO}
io.github.resilience4j: ${LOG_LEVEL_RESILIENCE4J:DEBUG}
root: INFO
com.kt.ai: DEBUG
org.springframework.kafka: INFO
org.springframework.data.redis: INFO
io.github.resilience4j: DEBUG
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
file:
name: ${LOG_FILE_NAME:logs/ai-service.log}
name: ${LOG_FILE:logs/ai-service.log}
logback:
rollingpolicy:
max-file-size: ${LOG_FILE_MAX_SIZE:10MB}
max-history: ${LOG_FILE_MAX_HISTORY:7}
total-size-cap: ${LOG_FILE_TOTAL_CAP:100MB}
max-file-size: 10MB
max-history: 7
total-size-cap: 100MB
# Kafka Topics Configuration
kafka:
topics:
ai-job: ${KAFKA_TOPICS_AI_JOB:ai-event-generation-job}
ai-job-dlq: ${KAFKA_TOPICS_AI_JOB_DLQ:ai-event-generation-job-dlq}
ai-job: ${KAFKA_TOPIC_AI_JOB:ai-event-generation-job}
ai-job-dlq: ${KAFKA_TOPIC_AI_JOB_DLQ:ai-event-generation-job-dlq}
# AI API Configuration (실제 API 사용)
# AI External API Configuration
ai:
provider: ${AI_PROVIDER:CLAUDE}
claude:
api-url: ${AI_CLAUDE_API_URL:https://api.anthropic.com/v1/messages}
api-key: ${AI_CLAUDE_API_KEY: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}
anthropic-version: ${AI_CLAUDE_ANTHROPIC_VERSION:2023-06-01}
model: ${AI_CLAUDE_MODEL:claude-sonnet-4-5-20250929}
max-tokens: ${AI_CLAUDE_MAX_TOKENS:4096}
temperature: ${AI_CLAUDE_TEMPERATURE:0.7}
timeout: ${AI_CLAUDE_TIMEOUT:300000}
api-url: ${CLAUDE_API_URL:https://api.anthropic.com/v1/messages}
api-key: ${CLAUDE_API_KEY:}
anthropic-version: ${CLAUDE_ANTHROPIC_VERSION:2023-06-01}
model: ${CLAUDE_MODEL:claude-3-5-sonnet-20241022}
max-tokens: ${CLAUDE_MAX_TOKENS:4096}
temperature: ${CLAUDE_TEMPERATURE:0.7}
timeout: ${CLAUDE_TIMEOUT:300000} # 5 minutes
gpt4:
api-url: ${GPT4_API_URL:https://api.openai.com/v1/chat/completions}
api-key: ${GPT4_API_KEY:}
model: ${GPT4_MODEL:gpt-4-turbo-preview}
max-tokens: ${GPT4_MAX_TOKENS:4096}
timeout: ${GPT4_TIMEOUT:300000} # 5 minutes
provider: ${AI_PROVIDER:CLAUDE} # CLAUDE or GPT4
# Circuit Breaker Configuration
resilience4j:


@ -12,7 +12,7 @@
<entry key="DB_PASSWORD" value="Hi5Jessica!" />
<!-- JPA Configuration -->
<entry key="DDL_AUTO" value="create" />
<entry key="DDL_AUTO" value="update" />
<entry key="SHOW_SQL" value="true" />
<!-- Redis Configuration -->
@ -24,7 +24,7 @@
<!-- Kafka Configuration (원격 서버) -->
<entry key="KAFKA_ENABLED" value="true" />
<entry key="KAFKA_BOOTSTRAP_SERVERS" value="20.249.182.13:9095,4.217.131.59:9095" />
<entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers-v3" />
<entry key="KAFKA_CONSUMER_GROUP_ID" value="analytics-service-consumers" />
<!-- Sample Data Configuration (MVP Only) -->
<!-- ⚠️ Kafka Producer로 이벤트 발행 (Consumer가 처리) -->
@ -39,7 +39,7 @@
<entry key="JWT_REFRESH_TOKEN_VALIDITY" value="86400" />
<!-- CORS Configuration -->
<entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*,http://*.nip.io:*" />
<entry key="CORS_ALLOWED_ORIGINS" value="http://localhost:*" />
<!-- Logging Configuration -->
<entry key="LOG_FILE" value="logs/analytics-service.log" />


@ -1,24 +0,0 @@
# Multi-stage build for Spring Boot application
FROM eclipse-temurin:21-jre-alpine AS builder
WORKDIR /app
COPY build/libs/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
# Copy layers from builder
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8086/api/v1/analytics/actuator/health || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]


@ -1,7 +1,3 @@
bootJar {
archiveFileName = 'analytics-service.jar'
}
dependencies {
// Kafka Consumer
implementation 'org.springframework.kafka:spring-kafka'


@ -1,108 +0,0 @@
# Backend-Frontend API Integration Review and Fixes
**Date**: 2025-10-28
**Branch**: feature/analytics
**Scope**: Analytics Service backend DTO and service changes
---
## 📝 Summary of Changes
### 1⃣ Field Name Alignment (frontend compatibility)
**Goal**: make backend response DTO field names match the frontend mock-data field names
| Before (backend) | After (backend) | Frontend |
|-----------------|----------------|-----------|
| `summary.totalParticipants` | `summary.participants` | `summary.participants` ✅ |
| `channelPerformance[].channelName` | `channelPerformance[].channel` | `channelPerformance[].channel` ✅ |
| `roi.totalInvestment` | `roi.totalCost` | `roiDetail.totalCost` ✅ |
### 2⃣ Added Delta Fields
**Goal**: provide the delta indicators and target values the frontend requires
| Field | Type | Description | Current value |
|-----|------|------|---------|
| `summary.participantsDelta` | `Integer` | Participant delta vs. previous period | `0` (TODO: calculation logic needed) |
| `summary.targetRoi` | `Double` | Target ROI (%) | taken from EventStats |
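Taken together, the renamed and added fields imply response shapes like the following; this is a TypeScript sketch inferred from the tables above, so the field names match the tables but everything else is illustrative:

```typescript
// Response shapes after the field renames described above.
export interface AnalyticsSummary {
  participants: number;       // renamed from totalParticipants
  participantsDelta: number;  // delta vs. previous period (currently 0, TODO)
  targetRoi: number;          // target ROI (%), sourced from EventStats
}

export interface ChannelSummary {
  channel: string;            // renamed from channelName
}

export interface RoiSummary {
  totalCost: number;          // renamed from totalInvestment
}
```

With these interfaces on the frontend, the renamed backend DTOs and the mock-data shapes type-check against the same contract.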
---
## 🔧 Modified Files
### DTOs (response structure changes)
1. **AnalyticsSummary.java**
- ✅ `totalParticipants` → `participants`
- ✅ added `participantsDelta` field
- ✅ added `targetRoi` field
2. **ChannelSummary.java**
- ✅ `channelName` → `channel`
3. **RoiSummary.java**
- ✅ `totalInvestment` → `totalCost`
### Entity (database schema change)
4. **EventStats.java**
- ✅ added `targetRoi` field (`BigDecimal`, default: 0)
### Services (business logic updates)
5. **AnalyticsService.java**
- ✅ use `.participants()`
- ✅ added `.participantsDelta(0)` (marked TODO)
- ✅ added `.targetRoi()`
- ✅ use `.channel()`
6. **ROICalculator.java**
- ✅ use `.totalCost()`
7. **UserAnalyticsService.java**
- ✅ use `.participants()`
- ✅ added `.participantsDelta(0)`
- ✅ use `.channel()`
- ✅ use `.totalCost()`
---
## ✅ Verification
### Compilation succeeded
```bash
$ ./gradlew analytics-service:compileJava
BUILD SUCCESSFUL in 8s
```
---
## 📊 Database Schema Change
### EventStats table
```sql
ALTER TABLE event_stats
ADD COLUMN target_roi DECIMAL(10,2) DEFAULT 0.00;
```
**⚠️ Note**
- Applied automatically depending on the Spring Boot JPA `ddl-auto` setting
---
## 📌 Next Steps
### Priority: HIGH
1. **Frontend API integration testing**
2. **Implement the participantsDelta calculation logic**
3. **Populate targetRoi data** (via Event Service integration)
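For the delta calculation marked TODO above, one possible shape is sketched below. This is hypothetical: the report only states that the logic is missing, so the window handling and data access here are assumptions:

```typescript
// Hypothetical participantsDelta: count participants registered in the
// current window, minus the count from the immediately preceding window
// of equal length (half-open intervals [from, to)).
export interface Participation {
  participantId: string;
  registeredAt: Date;
}

export function participantsDelta(
  records: Participation[],
  windowStart: Date,
  windowEnd: Date,
): number {
  const windowMs = windowEnd.getTime() - windowStart.getTime();
  const prevStart = new Date(windowStart.getTime() - windowMs);
  const inRange = (t: Date, from: Date, to: Date) => t >= from && t < to;
  const current = records.filter(r =>
    inRange(r.registeredAt, windowStart, windowEnd)).length;
  const previous = records.filter(r =>
    inRange(r.registeredAt, prevStart, windowStart)).length;
  return current - previous;
}
```

In the actual service the two counts would presumably come from repository queries rather than an in-memory list.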
### Priority: MEDIUM
4. Implement time-of-day analysis
5. Implement participant profiles
6. Implement ROI breakdown


@ -63,7 +63,7 @@ public class AnalyticsBatchScheduler {
event.getEventId(), event.getEventTitle());
// refresh=true로 호출하여 캐시 갱신 외부 API 호출
analyticsService.getDashboardData(event.getEventId(), true);
analyticsService.getDashboardData(event.getEventId(), null, null, true);
successCount++;
log.info("✅ 배치 갱신 완료: eventId={}", event.getEventId());
@ -99,7 +99,7 @@ public class AnalyticsBatchScheduler {
for (EventStats event : allEvents) {
try {
analyticsService.getDashboardData(event.getEventId(), true);
analyticsService.getDashboardData(event.getEventId(), null, null, true);
log.debug("초기 데이터 로딩 완료: eventId={}", event.getEventId());
} catch (Exception e) {
log.warn("초기 데이터 로딩 실패: eventId={}, error={}",


@ -17,13 +17,13 @@ import java.util.Map;
* Kafka Consumer 설정
*/
@Configuration
@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = false)
@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = true)
public class KafkaConsumerConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Value("${spring.kafka.consumer.group-id:analytics-service-consumers-v3}")
@Value("${spring.kafka.consumer.group-id:analytics-service}")
private String groupId;
@Bean


@ -1,46 +0,0 @@
package com.kt.event.analytics.config;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import java.util.HashMap;
import java.util.Map;
/**
* Kafka Producer 설정
*
* MVP 전용: SampleDataLoader가 Kafka 이벤트를 발행하기 위해 필요
* 실제 운영: Analytics Service는 순수 Consumer 역할만 수행하므로 Producer 불필요
*
* String 직렬화 방식 사용 (SampleDataLoader가 JSON 문자열을 직접 발행)
*/
@Configuration
@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = false)
public class KafkaProducerConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.ACKS_CONFIG, "all");
configProps.put(ProducerConfig.RETRIES_CONFIG, 3);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}


@ -11,23 +11,19 @@ import jakarta.annotation.PreDestroy;
import jakarta.persistence.EntityManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DeleteConsumerGroupOffsetsResult;
import org.apache.kafka.clients.admin.ListConsumerGroupOffsetsResult;
import org.apache.kafka.common.TopicPartition;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import java.math.BigDecimal;
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.UUID;
/**
* 샘플 데이터 로더 (Kafka Producer 방식)
@ -51,7 +47,6 @@ import java.util.concurrent.TimeUnit;
public class SampleDataLoader implements ApplicationRunner {
private final KafkaTemplate<String, String> kafkaTemplate;
private final KafkaAdmin kafkaAdmin;
private final ObjectMapper objectMapper;
private final EventStatsRepository eventStatsRepository;
private final ChannelStatsRepository channelStatsRepository;
@ -61,9 +56,6 @@ public class SampleDataLoader implements ApplicationRunner {
private final Random random = new Random();
@Value("${spring.kafka.consumer.group-id}")
private String consumerGroupId;
// Kafka Topic Names (MVP용 샘플 토픽)
private static final String EVENT_CREATED_TOPIC = "sample.event.created";
private static final String PARTICIPANT_REGISTERED_TOPIC = "sample.participant.registered";
@ -93,9 +85,9 @@ public class SampleDataLoader implements ApplicationRunner {
// Redis 멱등성 삭제 (새로운 이벤트 처리를 위해)
log.info("Redis 멱등성 키 삭제 중...");
redisTemplate.delete("processed_events_v2");
redisTemplate.delete("distribution_completed_v2");
redisTemplate.delete("processed_participants_v2");
redisTemplate.delete("processed_events");
redisTemplate.delete("distribution_completed");
redisTemplate.delete("processed_participants");
log.info("✅ Redis 멱등성 키 삭제 완료");
try {
@ -111,8 +103,6 @@ public class SampleDataLoader implements ApplicationRunner {
// 3. ParticipantRegistered 이벤트 발행 ( 이벤트당 다수 참여자)
publishParticipantRegisteredEvents();
log.info("⏳ 참여자 등록 이벤트 처리 대기 중... (20초)");
Thread.sleep(20000); // ParticipantRegisteredConsumer가 180개 이벤트 처리할 시간 (비관적 고려)
log.info("========================================");
log.info("🎉 Kafka 이벤트 발행 완료! (Consumer가 처리 중...)");
@ -137,17 +127,16 @@ public class SampleDataLoader implements ApplicationRunner {
}
/**
* 서비스 종료 전체 데이터 삭제 Consumer Offset 리셋
* 서비스 종료 전체 데이터 삭제
*/
@PreDestroy
@Transactional
public void onShutdown() {
log.info("========================================");
log.info("🛑 서비스 종료: PostgreSQL 전체 데이터 삭제 + Kafka Consumer Offset 리셋");
log.info("🛑 서비스 종료: PostgreSQL 전체 데이터 삭제");
log.info("========================================");
try {
// 1. PostgreSQL 데이터 삭제
long timelineCount = timelineDataRepository.count();
long channelCount = channelStatsRepository.count();
long eventCount = eventStatsRepository.count();
@ -164,10 +153,6 @@ public class SampleDataLoader implements ApplicationRunner {
entityManager.clear();
log.info("✅ 모든 샘플 데이터 삭제 완료!");
// 2. Kafka Consumer Offset 리셋 (다음 시작 처음부터 읽도록)
resetConsumerOffsets();
log.info("========================================");
} catch (Exception e) {
@ -175,85 +160,37 @@ public class SampleDataLoader implements ApplicationRunner {
}
}
/**
* Kafka Consumer Group Offset 리셋
*
* 서비스 종료 Consumer offset을 삭제하여 다음 시작
* auto.offset.reset=earliest 설정에 따라 처음부터 읽도록
*/
private void resetConsumerOffsets() {
try (AdminClient adminClient = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
log.info("🔄 Kafka Consumer Offset 리셋 시작: group={}", consumerGroupId);
// 모든 토픽의 offset 삭제
Set<TopicPartition> partitions = new HashSet<>();
// 토픽별 파티션 추가 (설계서상 토픽은 3개 파티션)
for (int i = 0; i < 3; i++) {
partitions.add(new TopicPartition(EVENT_CREATED_TOPIC, i));
partitions.add(new TopicPartition(PARTICIPANT_REGISTERED_TOPIC, i));
partitions.add(new TopicPartition(DISTRIBUTION_COMPLETED_TOPIC, i));
}
// Consumer Group Offset 삭제
DeleteConsumerGroupOffsetsResult result = adminClient.deleteConsumerGroupOffsets(
consumerGroupId,
partitions
);
// 완료 대기 (최대 10초)
result.all().get(10, TimeUnit.SECONDS);
log.info("✅ Kafka Consumer Offset 리셋 완료!");
log.info(" → 다음 시작 시 처음부터(earliest) 메시지를 읽습니다.");
} catch (Exception e) {
// Offset 리셋 실패는 치명적이지 않으므로 경고만 출력
log.warn("⚠️ Kafka Consumer Offset 리셋 실패 (무시 가능): {}", e.getMessage());
log.warn(" → 수동으로 Consumer Group ID를 변경하거나, Kafka 도구로 offset을 삭제하세요.");
}
}
/**
* EventCreated 이벤트 발행
*/
private void publishEventCreatedEvents() throws Exception {
// 이벤트 1: 신년맞이 할인 이벤트 (진행중, 높은 성과 - ROI 200%)
// 이벤트 1: 신년맞이 할인 이벤트 (진행중, 높은 성과)
EventCreatedEvent event1 = EventCreatedEvent.builder()
.eventId("evt_2025012301")
.eventTitle("신년맞이 20% 할인 이벤트")
.storeId("store_001")
.totalInvestment(new BigDecimal("5000000"))
.expectedRevenue(new BigDecimal("15000000")) // 투자 대비 3배 수익
.status("ACTIVE")
.startDate(java.time.LocalDateTime.of(2025, 1, 23, 0, 0)) // 2025-01-23 시작
.endDate(null) // 진행중
.build();
publishEvent(EVENT_CREATED_TOPIC, event1);
// 이벤트 2: 설날 특가 이벤트 (진행중, 중간 성과 - ROI 100%)
// 이벤트 2: 설날 특가 이벤트 (진행중, 중간 성과)
EventCreatedEvent event2 = EventCreatedEvent.builder()
.eventId("evt_2025020101")
.eventTitle("설날 특가 선물세트 이벤트")
.storeId("store_001")
.totalInvestment(new BigDecimal("3500000"))
.expectedRevenue(new BigDecimal("7000000")) // 투자 대비 2배 수익
.status("ACTIVE")
.startDate(java.time.LocalDateTime.of(2025, 2, 1, 0, 0)) // 2025-02-01 시작
.endDate(null) // 진행중
.build();
publishEvent(EVENT_CREATED_TOPIC, event2);
// 이벤트 3: 겨울 신메뉴 런칭 이벤트 (종료, 저조한 성과 - ROI 50%)
// 이벤트 3: 겨울 신메뉴 런칭 이벤트 (종료, 저조한 성과)
EventCreatedEvent event3 = EventCreatedEvent.builder()
.eventId("evt_2025011501")
.eventTitle("겨울 신메뉴 런칭 이벤트")
.storeId("store_001")
.totalInvestment(new BigDecimal("2000000"))
.expectedRevenue(new BigDecimal("3000000")) // 투자 대비 1.5배 수익
.status("COMPLETED")
.startDate(java.time.LocalDateTime.of(2025, 1, 15, 0, 0)) // 2025-01-15 시작
.endDate(java.time.LocalDateTime.of(2025, 1, 31, 23, 59)) // 2025-01-31 종료
.build();
publishEvent(EVENT_CREATED_TOPIC, event3);
@ -271,63 +208,42 @@ public class SampleDataLoader implements ApplicationRunner {
{1500, 3000, 1000, 500} // 이벤트3
};
// 이벤트의 투자 금액
BigDecimal[] totalInvestments = {
new BigDecimal("5000000"), // 이벤트1: 500만원
new BigDecimal("3500000"), // 이벤트2: 350만원
new BigDecimal("2000000") // 이벤트3: 200만원
};
// 채널 배포는 투자의 50% 사용 (나머지는 경품/콘텐츠/운영비용)
double channelBudgetRatio = 0.50;
// 채널별 비용 비율 (채널 예산 내에서: 우리동네TV 30%, 지니TV 30%, 링고비즈 25%, SNS 15%)
double[] costRatios = {0.30, 0.30, 0.25, 0.15};
for (int i = 0; i < eventIds.length; i++) {
String eventId = eventIds[i];
BigDecimal totalInvestment = totalInvestments[i];
// 채널 배포 예산: 투자의 50%
BigDecimal channelBudget = totalInvestment.multiply(BigDecimal.valueOf(channelBudgetRatio));
// 4개 채널을 배열로 구성
List<DistributionCompletedEvent.ChannelDistribution> channels = new ArrayList<>();
// 1. 우리동네TV (TV) - 채널 예산의 30%
// 1. 우리동네TV (TV)
channels.add(DistributionCompletedEvent.ChannelDistribution.builder()
.channel("우리동네TV")
.channelType("TV")
.status("SUCCESS")
.expectedViews(expectedViews[i][0])
.distributionCost(channelBudget.multiply(BigDecimal.valueOf(costRatios[0])))
.build());
// 2. 지니TV (TV) - 채널 예산의 30%
// 2. 지니TV (TV)
channels.add(DistributionCompletedEvent.ChannelDistribution.builder()
.channel("지니TV")
.channelType("TV")
.status("SUCCESS")
.expectedViews(expectedViews[i][1])
.distributionCost(channelBudget.multiply(BigDecimal.valueOf(costRatios[1])))
.build());
// 3. 링고비즈 (CALL) - 채널 예산의 25%
// 3. 링고비즈 (CALL)
channels.add(DistributionCompletedEvent.ChannelDistribution.builder()
.channel("링고비즈")
.channelType("CALL")
.status("SUCCESS")
.expectedViews(expectedViews[i][2])
.distributionCost(channelBudget.multiply(BigDecimal.valueOf(costRatios[2])))
.build());
// 4. SNS (SNS) - 채널 예산의 15%
// 4. SNS (SNS)
channels.add(DistributionCompletedEvent.ChannelDistribution.builder()
.channel("SNS")
.channelType("SNS")
.status("SUCCESS")
.expectedViews(expectedViews[i][3])
.distributionCost(channelBudget.multiply(BigDecimal.valueOf(costRatios[3])))
.build());
// 이벤트 발행 (채널 배열 포함)
@ -345,53 +261,22 @@ public class SampleDataLoader implements ApplicationRunner {
/**
* ParticipantRegistered 이벤트 발행
*
* 현실적인 참여 패턴 반영:
* - 120명의 고유 참여자 생성
* - 일부 참여자는 여러 이벤트에 중복 참여
* - 이벤트1: 100명 (user001~user100)
* - 이벤트2: 50명 (user051~user100) 50명이 이벤트1과 중복
* - 이벤트3: 30명 (user071~user100) 30명이 이전 이벤트들과 중복
*/
private void publishParticipantRegisteredEvents() throws Exception {
String[] eventIds = {"evt_2025012301", "evt_2025020101", "evt_2025011501"};
int[] totalParticipants = {100, 50, 30}; // MVP 테스트용 샘플 데이터 ( 180명)
String[] channels = {"우리동네TV", "지니TV", "링고비즈", "SNS"};
// 이벤트별 참여자 범위 (중복 참여 반영)
int[][] participantRanges = {
{1, 100}, // 이벤트1: user001~user100 (100명)
{51, 100}, // 이벤트2: user051~user100 (50명, 이벤트1과 50명 중복)
{71, 100} // 이벤트3: user071~user100 (30명, 모두 중복)
};
int totalPublished = 0;
for (int i = 0; i < eventIds.length; i++) {
String eventId = eventIds[i];
int startUser = participantRanges[i][0];
int endUser = participantRanges[i][1];
int eventParticipants = endUser - startUser + 1;
int participants = totalParticipants[i];
log.info("이벤트 {} 참여자 발행 시작: user{:03d}~user{:03d} ({}명)",
eventId, startUser, endUser, eventParticipants);
// 참여자에 대해 ParticipantRegistered 이벤트 발행
for (int userId = startUser; userId <= endUser; userId++) {
String participantId = String.format("user%03d", userId); // user001, user002, ...
// 채널별 가중치 기반 랜덤 배정
// SNS: 45%, 우리동네TV: 25%, 지니TV: 20%, 링고비즈: 10%
int randomValue = random.nextInt(100);
String channel;
if (randomValue < 45) {
channel = "SNS"; // 0~44: 45%
} else if (randomValue < 70) {
channel = "우리동네TV"; // 45~69: 25%
} else if (randomValue < 90) {
channel = "지니TV"; // 70~89: 20%
} else {
channel = "링고비즈"; // 90~99: 10%
}
// 이벤트에 대해 참여자 수만큼 ParticipantRegistered 이벤트 발행
for (int j = 0; j < participants; j++) {
String participantId = UUID.randomUUID().toString();
String channel = channels[j % channels.length]; // 채널 순환 배정
ParticipantRegisteredEvent event = ParticipantRegisteredEvent.builder()
.eventId(eventId)
@ -403,38 +288,19 @@ public class SampleDataLoader implements ApplicationRunner {
totalPublished++;
// 동시성 충돌 방지: 10개마다 100ms 대기
if (totalPublished % 10 == 0) {
if ((j + 1) % 10 == 0) {
Thread.sleep(100);
}
}
log.info("✅ 이벤트 {} 참여자 발행 완료: {}명", eventId, eventParticipants);
}
log.info("========================================");
log.info("✅ ParticipantRegistered 이벤트 {}건 발행 완료", totalPublished);
log.info("📊 참여 패턴:");
log.info(" - 총 고유 참여자: 100명 (user001~user100)");
log.info(" - 이벤트1 참여: 100명");
log.info(" - 이벤트2 참여: 50명 (이벤트1과 50명 중복)");
log.info(" - 이벤트3 참여: 30명 (이벤트1,2와 모두 중복)");
log.info(" - 3개 이벤트 모두 참여: 30명");
log.info(" - 2개 이벤트 참여: 20명");
log.info(" - 1개 이벤트만 참여: 50명");
log.info("📺 채널별 참여 비율 (가중치):");
log.info(" - SNS: 45% (가장 높음)");
log.info(" - 우리동네TV: 25%");
log.info(" - 지니TV: 20%");
log.info(" - 링고비즈: 10%");
log.info("========================================");
}
/**
* TimelineData 생성 (시간대별 샘플 데이터)
*
* - 이벤트마다 30일 × 24시간 = 720시간 hourly 데이터 생성
* - interval=hourly: 시간별 표시 (최근 7일 적합)
* - interval=daily: 일별 자동 집계 (30일 전체)
* - 이벤트마다 30일 daily 데이터 생성
* - 참여자 , 조회수, 참여행동, 전환수, 누적 참여자
*/
private void createTimelineData() {
@ -442,63 +308,52 @@ public class SampleDataLoader implements ApplicationRunner {
String[] eventIds = {"evt_2025012301", "evt_2025020101", "evt_2025011501"};
// 이벤트별 시간당 기준 참여자 (이벤트 성과에 따라 다름)
int[] baseParticipantsPerHour = {4, 2, 1}; // 이벤트1(높음), 이벤트2(중간), 이벤트3(낮음)
// 이벤트별 기준 참여자 (이벤트 성과에 따라 다름)
int[] baseParticipants = {20, 12, 5}; // 이벤트1(높음), 이벤트2(중간), 이벤트3(낮음)
for (int eventIndex = 0; eventIndex < eventIds.length; eventIndex++) {
String eventId = eventIds[eventIndex];
int baseParticipant = baseParticipantsPerHour[eventIndex];
int baseParticipant = baseParticipants[eventIndex];
int cumulativeParticipants = 0;
// 이벤트 ID에서 날짜 파싱 (evt_2025012301 2025-01-23)
String dateStr = eventId.substring(4); // "2025012301"
int year = Integer.parseInt(dateStr.substring(0, 4)); // 2025
int month = Integer.parseInt(dateStr.substring(4, 6)); // 01
int day = Integer.parseInt(dateStr.substring(6, 8)); // 23
// Generate 30 days of data (starting 2024-09-24)
java.time.LocalDateTime startDate = java.time.LocalDateTime.of(2024, 9, 24, 0, 0);
// Generate 30 days of hourly data from the event start date
java.time.LocalDateTime startDate = java.time.LocalDateTime.of(year, month, day, 0, 0);
for (int day = 0; day < 30; day++) {
java.time.LocalDateTime timestamp = startDate.plusDays(day);
for (int dayOffset = 0; dayOffset < 30; dayOffset++) {
for (int hour = 0; hour < 24; hour++) {
java.time.LocalDateTime timestamp = startDate.plusDays(dayOffset).plusHours(hour);
// Hourly participant variation (higher during peak hours, 12:00-20:00)
int hourMultiplier = (hour >= 12 && hour <= 20) ? 2 : 1;
int hourlyParticipants = (baseParticipant * hourMultiplier) + random.nextInt(baseParticipant + 1);
cumulativeParticipants += hourlyParticipants;
// Generate a random participant count (baseline ± 50%)
int dailyParticipants = baseParticipant + random.nextInt(baseParticipant + 1);
cumulativeParticipants += dailyParticipants;
// Views are 3-5× participants
int hourlyViews = hourlyParticipants * (3 + random.nextInt(3));
int dailyViews = dailyParticipants * (3 + random.nextInt(3));
// Engagement actions are 1-2× participants
int hourlyEngagement = hourlyParticipants * (1 + random.nextInt(2));
int dailyEngagement = dailyParticipants * (1 + random.nextInt(2));
// Conversions are 50-80% of participants
int hourlyConversions = (int) (hourlyParticipants * (0.5 + random.nextDouble() * 0.3));
int dailyConversions = (int) (dailyParticipants * (0.5 + random.nextDouble() * 0.3));
// Build the TimelineData record
com.kt.event.analytics.entity.TimelineData timelineData =
com.kt.event.analytics.entity.TimelineData.builder()
.eventId(eventId)
.timestamp(timestamp)
.participants(hourlyParticipants)
.views(hourlyViews)
.engagement(hourlyEngagement)
.conversions(hourlyConversions)
.participants(dailyParticipants)
.views(dailyViews)
.engagement(dailyEngagement)
.conversions(dailyConversions)
.cumulativeParticipants(cumulativeParticipants)
.build();
timelineDataRepository.save(timelineData);
}
log.info("✅ TimelineData created: eventId={}, 30 days of data", eventId);
}
log.info("✅ TimelineData created: eventId={}, startDate={}-{}-{}, 30 days × 24 hours = 720 records",
eventId, year, month, day);
}
log.info("✅ All TimelineData created: 3 events × 30 days × 24 hours = 2,160 records");
log.info("✅ All TimelineData created: 3 events × 30 days = 90 records");
}
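The new hourly generation above derives each event's start date from its ID and doubles the baseline during peak hours. A self-contained sketch of those two pieces, assuming the `evt_yyyyMMddNN` ID format used by the loader (class name hypothetical):

```java
import java.time.LocalDateTime;

public class TimelineMathSketch {
    // "evt_2025012301" -> 2025-01-23T00:00 (first 8 digits after "evt_" are yyyyMMdd)
    static LocalDateTime startOf(String eventId) {
        String d = eventId.substring(4);
        return LocalDateTime.of(
                Integer.parseInt(d.substring(0, 4)),  // year
                Integer.parseInt(d.substring(4, 6)),  // month
                Integer.parseInt(d.substring(6, 8)),  // day
                0, 0);
    }

    // Peak hours (12:00-20:00 inclusive) double the baseline; jitter stands in
    // for the random component added in the loader.
    static int hourlyParticipants(int base, int hour, int jitter) {
        int multiplier = (hour >= 12 && hour <= 20) ? 2 : 1;
        return base * multiplier + jitter;
    }
}
```

For event 1 (base 4), a 15:00 slot yields at least 8 participants while a 03:00 slot starts from 4, which is what produces the daytime bump in the timeline charts.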
/**
@@ -32,18 +32,30 @@ public class AnalyticsDashboardController {
* Retrieve the performance dashboard
*
* @param eventId event ID
* @param startDate query start date
* @param endDate query end date
* @param refresh whether to refresh the cache
* @return performance dashboard (from event start date to now)
* @return performance dashboard
*/
@Operation(
summary = "Retrieve performance dashboard",
description = "Retrieves the event's overall performance in a consolidated view. (from event start date to now)"
description = "Retrieves the event's overall performance in a consolidated view."
)
@GetMapping("/{eventId}/analytics")
public ResponseEntity<ApiResponse<AnalyticsDashboardResponse>> getEventAnalytics(
@Parameter(description = "Event ID", required = true)
@PathVariable String eventId,
@Parameter(description = "Query start date (ISO 8601 format)")
@RequestParam(required = false)
@DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
LocalDateTime startDate,
@Parameter(description = "Query end date (ISO 8601 format)")
@RequestParam(required = false)
@DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
LocalDateTime endDate,
@Parameter(description = "Whether to refresh the cache (if true, calls the external API)")
@RequestParam(required = false, defaultValue = "false")
Boolean refresh
@@ -51,7 +63,7 @@ public class AnalyticsDashboardController {
log.info("Performance dashboard API called: eventId={}, refresh={}", eventId, refresh);
AnalyticsDashboardResponse response = analyticsService.getDashboardData(
eventId, refresh
eventId, startDate, endDate, refresh
);
return ResponseEntity.ok(ApiResponse.success(response));
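The new `startDate`/`endDate` parameters above are bound with `@DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)`, which accepts the same ISO 8601 shape that `LocalDateTime.parse` handles by default. A quick sketch of the expected value format (the request URL in the comment is illustrative, not taken from the repository):

```java
import java.time.LocalDateTime;

public class IsoParamSketch {
    public static void main(String[] args) {
        // e.g. GET /{eventId}/analytics?startDate=2025-01-23T00:00:00&endDate=2025-02-22T23:59:59
        LocalDateTime start = LocalDateTime.parse("2025-01-23T00:00:00");
        LocalDateTime end = LocalDateTime.parse("2025-02-22T23:59:59");
        System.out.println(start.isBefore(end)); // prints "true"; the service can validate the range
    }
}
```

Values missing the `T` separator or using a space instead would fail binding with a 400 before reaching the service layer.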
@@ -1,75 +0,0 @@
package com.kt.event.analytics.controller;
import com.kt.event.analytics.config.SampleDataLoader;
import com.kt.event.common.dto.ApiResponse;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.ApplicationArguments;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* Debug controller
*
* Development/testing only
*/
@Tag(name = "Debug", description = "Debug API (development/testing only)")
@Slf4j
@RestController
@RequestMapping("/api/debug")
@RequiredArgsConstructor
public class DebugController {
private final SampleDataLoader sampleDataLoader;
/**
* Manually generate sample data
*/
@Operation(
summary = "Manually generate sample data",
description = "Runs the SampleDataLoader manually to generate sample data."
)
@PostMapping("/reload-sample-data")
public ResponseEntity<ApiResponse<String>> reloadSampleData() {
try {
log.info("🔧 Manual sample data generation requested");
// Run the SampleDataLoader
sampleDataLoader.run(new ApplicationArguments() {
@Override
public String[] getSourceArgs() {
return new String[0];
}
@Override
public java.util.Set<String> getOptionNames() {
return java.util.Collections.emptySet();
}
@Override
public boolean containsOption(String name) {
return false;
}
@Override
public java.util.List<String> getOptionValues(String name) {
return null;
}
@Override
public java.util.List<String> getNonOptionArgs() {
return java.util.Collections.emptyList();
}
});
return ResponseEntity.ok(ApiResponse.success("Sample data generation complete"));
} catch (Exception e) {
log.error("❌ Sample data generation failed", e);
return ResponseEntity.ok(ApiResponse.success("Sample data generation failed: " + e.getMessage()));
}
}
}
@@ -35,12 +35,14 @@ public class TimelineAnalyticsController {
*
* @param eventId event ID
* @param interval time interval unit
* @param startDate query start date
* @param endDate query end date
* @param metrics metrics to query
* @return time-series participation trend (from event start date to now)
* @return time-series participation trend
*/
@Operation(
summary = "Time-series participation trend",
description = "Analyzes the time-series participation trend over the event period. (from event start date to now)"
description = "Analyzes the time-series participation trend over the event period."
)
@GetMapping("/{eventId}/analytics/timeline")
public ResponseEntity<ApiResponse<TimelineAnalyticsResponse>> getTimelineAnalytics(
@@ -51,6 +53,16 @@ public class TimelineAnalyticsController {
@RequestParam(required = false, defaultValue = "daily")
String interval,
@Parameter(description = "Query start date (ISO 8601 format)")
@RequestParam(required = false)
@DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
LocalDateTime startDate,
@Parameter(description = "Query end date (ISO 8601 format)")
@RequestParam(required = false)
@DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
LocalDateTime endDate,
@Parameter(description = "Metrics to query (comma-separated)")
@RequestParam(required = false)
String metrics
@@ -62,7 +74,7 @@ public class TimelineAnalyticsController {
: null;
TimelineAnalyticsResponse response = timelineAnalyticsService.getTimelineAnalytics(
eventId, interval, metricList
eventId, interval, startDate, endDate, metricList
);
return ResponseEntity.ok(ApiResponse.success(response));
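The `metrics` query parameter arrives as a single comma-separated string, and the controller above turns it into a `metricList` (falling back to `null` when absent). A stdlib sketch of that split, with a null/blank guard (helper name hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class MetricsParamSketch {
    // Split the comma-separated metrics query parameter, tolerating null/blank input.
    static List<String> toMetricList(String metrics) {
        return (metrics != null && !metrics.isBlank())
                ? Arrays.asList(metrics.split(","))
                : null;
    }
}
```

So `metrics=participants,views` becomes a two-element list, while omitting the parameter leaves `metricList` as `null`, which the service can interpret as "all metrics".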