Roll back and clean up Swagger-related changes

This commit is contained in:
sunmingLee 2025-10-24 11:04:59 +09:00
parent ea82ff4748
commit 28a7a91ca2
40 changed files with 2184 additions and 0 deletions

Binary file not shown.

View File

@ -0,0 +1,20 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="DistributionServiceApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot" nameIsGenerated="true">
<option name="ACTIVE_PROFILES" />
<module name="kt-event-marketing.distribution-service.main" />
<option name="SPRING_BOOT_MAIN_CLASS" value="com.kt.distribution.DistributionApplication" />
<extension name="coverage">
<pattern>
<option name="PATTERN" value="com.kt.distribution.*" />
<option name="ENABLED" value="true" />
</pattern>
</extension>
<envs>
<env name="KAFKA_BOOTSTRAP_SERVERS" value="localhost:9092" />
<env name="KAFKA_CONSUMER_GROUP" value="distribution-service" />
</envs>
<method v="2">
<option name="Make" enabled="true" />
</method>
</configuration>
</component>

claude/make-run-profile.md (Normal file, 175 lines)
View File

@ -0,0 +1,175 @@
# Service Run Profile Authoring Guide
[Requirements]
- Follow the <Principles> below
- Work through the <Procedure> in order
- Write the output files as instructed under [Result]
[Guide]
<Principles>
- Do not hardcode values in the configuration manifests (src/main/resources/application*.yml); inject them as environment variables (see the sketch below)
- Connect to databases deployed on Kubernetes through a LoadBalancer-type Service
- When an MQ is used, register the connection details from the 'MQ installation report' as environment variables in the run profile
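For reference, the first principle maps directly onto Spring's property placeholder syntax. A minimal before/after sketch (the property names and defaults here are illustrative, not taken from the actual manifests):

```yaml
# Hardcoded values replaced by environment variables with fallback defaults
spring:
  datasource:
    url: jdbc:postgresql://${DB_HOST:localhost}:${DB_PORT:5432}/${DB_NAME:appdb}
    username: ${DB_USERNAME:app}
    password: ${DB_PASSWORD:}   # sensitive: no meaningful default is committed
```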
<Procedure>
- Preparation:
  - Review the database installation report (develop/database/exec/db-exec-dev.md)
  - Review the cache installation report (develop/database/exec/cache-exec-dev.md)
  - Review the MQ installation report (develop/mq/mq-exec-dev.md) and note the connection details
  - Run kubectl get svc -n tripgen-dev | grep LoadBalancer to list the External IPs
- Execution:
  - Perform the steps below for each service in parallel, using sub-agents
  - Update the configuration manifest
    - Convert any hardcoded values to environment variables
    - Database and MQ connection details in particular must be converted to environment variables
    - Omit default values for sensitive settings, or keep them deliberately minimal
    - Configure the log file as described in '<Log settings>'
  - Write the service run profile following the '<Run profile authoring guide>'
    - Use the LoadBalancer External IPs for DB_HOST and REDIS_HOST
    - Set the MQ connection details to match the environment variable names used in application.yml
  - Start the service and fix any errors
    - Download the IntelliJ service runner (IntelliJ서비스실행기) into the 'tools' directory
    - Run it in the background with python or python3 and analyze the resulting log:
      nohup python3 tools/run-intellij-service-profile.py {service-name} > logs/{service-name}.log 2>&1 & echo "Started {service-name} with PID: $!"
    - Do not start services any other way; **always use the python program**
    - After fixing errors, update the run profile's environment variables as needed
    - Once the service starts cleanly, stop it (see the startup-check sketch after this list)
- Result: {service-name}/.run
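A quick way to confirm a clean start is to watch the log for Spring Boot's standard startup message; a minimal sketch, assuming the log path used in the command above:

```bash
# Follow the service log; a successful start prints "Started ...Application in N seconds"
tail -f logs/{service-name}.log | grep --line-buffered -E "Started .*Application|ERROR"
```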
<How to stop a service>
- Windows
  - netstat -ano | findstr :{PORT}
  - powershell "Stop-Process -Id {Process number} -Force"
- Linux/Mac
  - netstat -anp | grep {PORT} (or lsof -i :{PORT} on macOS)
  - kill -9 {Process number}
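On Linux/macOS the two steps above can also be collapsed into a single line (assuming lsof is installed):

```bash
# Kill whatever is listening on the service port
kill -9 $(lsof -ti:{PORT})
```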
<Log settings>
- **application.yml log file settings**:
```yaml
logging:
file:
name: ${LOG_FILE:logs/trip-service.log}
logback:
rollingpolicy:
max-file-size: 10MB
max-history: 7
total-size-cap: 100MB
```
<Run profile authoring guide>
- Write the profile as {service-name}/.run/{service-name}.run.xml
- It must be a **Gradle run profile**, not a Spring Boot one: see '[Run profile example]'
- Check the LoadBalancer Services of the databases deployed on Kubernetes:
  - Run kubectl get svc -n {namespace} | grep LoadBalancer to find the LoadBalancer IPs
  - Use each service's database LoadBalancer External IP as DB_HOST
  - Use the cache (Redis) LoadBalancer External IP as REDIS_HOST
- MQ connection settings:
  - Check the connection details in the MQ installation report (develop/mq/mq-exec-dev.md)
  - Example environment variables by MQ type:
    - RabbitMQ: RABBITMQ_HOST, RABBITMQ_PORT, RABBITMQ_USERNAME, RABBITMQ_PASSWORD
    - Kafka: KAFKA_BOOTSTRAP_SERVERS, KAFKA_SECURITY_PROTOCOL
    - Azure Service Bus: SERVICE_BUS_CONNECTION_STRING
    - AWS SQS: AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
    - Redis (Pub/Sub): REDIS_HOST, REDIS_PORT, REDIS_PASSWORD
    - ActiveMQ: ACTIVEMQ_BROKER_URL, ACTIVEMQ_USER, ACTIVEMQ_PASSWORD
    - Other MQs: register the host, port, credentials, connection string, etc. that the MQ needs as environment variables
  - Check the environment variable names defined in application.yml and map to them
- Backing-service connection mapping:
  - Take each service's DB credentials from the database installation report
  - Take each service's Redis credentials from the cache installation report
  - Use the LoadBalancer External IP as the host (not the internal DNS name)
  - Set DDL_AUTO to update in development mode
  - The JWT secret key must be identical across all services
  - Set environment variables so they match the ones referenced in application.yml
  - Do not leave sensitive application.yml values at their defaults; use the real backing-service values
  - Use the exact values confirmed by the backing-service connectivity check (see the sketch after this list)
- If the file already exists, review its contents and add/update/remove entries as needed
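Before filling in the profile, it is worth confirming that the backing services actually answer on their External IPs. A minimal connectivity check, assuming the tripgen-dev namespace and the standard psql/redis-cli clients (the IPs are the ones used in the example profile below; credentials come from the installation reports):

```bash
# List LoadBalancer services and note the EXTERNAL-IP column
kubectl get svc -n tripgen-dev | grep LoadBalancer

# Database and cache connectivity checks
psql -h 20.249.197.193 -p 5432 -U tripgen_user -d tripgen_user_db -c 'SELECT 1;'
redis-cli -h 20.214.121.28 -p 6379 ping
```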
[Run profile example]
```
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="user-service" type="GradleRunConfiguration" factoryName="Gradle">
<ExternalSystemSettings>
<option name="env">
<map>
<entry key="ACCOUNT_LOCK_DURATION_MINUTES" value="30" />
<entry key="CACHE_TTL" value="1800" />
<entry key="DB_HOST" value="20.249.197.193" /> <!-- LoadBalancer External IP 사용 -->
<entry key="DB_NAME" value="tripgen_user_db" />
<entry key="DB_PASSWORD" value="tripgen_user_123" />
<entry key="DB_PORT" value="5432" />
<entry key="DB_USERNAME" value="tripgen_user" />
<entry key="FILE_BASE_URL" value="http://localhost:8081" />
<entry key="FILE_MAX_SIZE" value="5242880" />
<entry key="FILE_UPLOAD_PATH" value="/app/uploads" />
<entry key="JPA_DDL_AUTO" value="update" />
<entry key="JPA_SHOW_SQL" value="true" />
<entry key="JWT_ACCESS_TOKEN_EXPIRATION" value="86400" />
<entry key="JWT_REFRESH_TOKEN_EXPIRATION" value="604800" />
<entry key="JWT_SECRET" value="dev-jwt-secret-key-for-development-only" />
<entry key="LOG_LEVEL_APP" value="DEBUG" />
<entry key="LOG_LEVEL_ROOT" value="INFO" />
<entry key="LOG_LEVEL_SECURITY" value="DEBUG" />
<entry key="MAX_LOGIN_ATTEMPTS" value="5" />
<entry key="PASSWORD_MIN_LENGTH" value="8" />
<entry key="REDIS_DATABASE" value="0" />
<entry key="REDIS_HOST" value="20.214.121.28" /> <!-- Redis LoadBalancer External IP 사용 -->
<entry key="REDIS_PASSWORD" value="" />
<entry key="REDIS_PORT" value="6379" />
<entry key="SERVER_PORT" value="8081" />
<entry key="SPRING_PROFILES_ACTIVE" value="dev" />
<!-- For services that use an MQ, add entries matching the MQ type -->
<!-- Azure Service Bus example -->
<entry key="SERVICE_BUS_CONNECTION_STRING" value="Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..." />
<!-- RabbitMQ example -->
<entry key="RABBITMQ_HOST" value="20.xxx.xxx.xxx" />
<entry key="RABBITMQ_PORT" value="5672" />
<!-- Kafka example -->
<entry key="KAFKA_BOOTSTRAP_SERVERS" value="20.xxx.xxx.xxx:9092" />
<!-- For other MQs, add the connection details that MQ requires as environment variables -->
</map>
</option>
<option name="executionName" />
<option name="externalProjectPath" value="$PROJECT_DIR$" />
<option name="externalSystemIdString" value="GRADLE" />
<option name="scriptParameters" value="" />
<option name="taskDescriptions">
<list />
</option>
<option name="taskNames">
<list>
<option value="user-service:bootRun" />
</list>
</option>
<option name="vmOptions" />
</ExternalSystemSettings>
<ExternalSystemDebugServerProcess>true</ExternalSystemDebugServerProcess>
<ExternalSystemReattachDebugProcess>true</ExternalSystemReattachDebugProcess>
<EXTENSION ID="com.intellij.execution.ExternalSystemRunConfigurationJavaExtension">
<extension name="net.ashald.envfile">
<option name="IS_ENABLED" value="false" />
<option name="IS_SUBST" value="false" />
<option name="IS_PATH_MACRO_SUPPORTED" value="false" />
<option name="IS_IGNORE_MISSING_FILES" value="false" />
<option name="IS_ENABLE_EXPERIMENTAL_INTEGRATIONS" value="false" />
<ENTRIES>
<ENTRY IS_ENABLED="true" PARSER="runconfig" IS_EXECUTABLE="false" />
</ENTRIES>
</extension>
</EXTENSION>
<DebugAllEnabled>false</DebugAllEnabled>
<RunAsTest>false</RunAsTest>
<method v="2" />
</configuration>
</component>
```
[References]
- Database installation report: develop/database/exec/db-exec-dev.md
  - DB connection details per service (username, password, database name)
  - List of LoadBalancer Service External IPs
- Cache installation report: develop/database/exec/cache-exec-dev.md
  - Redis connection details per service
  - List of LoadBalancer Service External IPs
- MQ installation report: develop/mq/mq-exec-dev.md
  - MQ type and connection details
  - Host, port, and credentials needed for the connection
  - LoadBalancer Service External IP (where applicable)

View File

@ -0,0 +1,51 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="distribution-service" type="GradleRunConfiguration" factoryName="Gradle">
<ExternalSystemSettings>
<option name="env">
<map>
<entry key="KAFKA_ENABLED" value="true" />
<entry key="KAFKA_BOOTSTRAP_SERVERS" value="4.230.50.63:9092" />
<entry key="KAFKA_CONSUMER_GROUP" value="distribution-service" />
<entry key="LOG_FILE" value="logs/distribution-service.log" />
<entry key="SERVER_PORT" value="8085" />
<entry key="URIDONGNETV_API_URL" value="http://localhost:9001/api/uridongnetv" />
<entry key="RINGOBIZ_API_URL" value="http://localhost:9002/api/ringobiz" />
<entry key="GINITV_API_URL" value="http://localhost:9003/api/ginitv" />
<entry key="INSTAGRAM_API_URL" value="http://localhost:9004/api/instagram" />
<entry key="NAVER_API_URL" value="http://localhost:9005/api/naver" />
<entry key="KAKAO_API_URL" value="http://localhost:9006/api/kakao" />
</map>
</option>
<option name="executionName" />
<option name="externalProjectPath" value="$PROJECT_DIR$" />
<option name="externalSystemIdString" value="GRADLE" />
<option name="scriptParameters" value="" />
<option name="taskDescriptions">
<list />
</option>
<option name="taskNames">
<list>
<option value="distribution-service:bootRun" />
</list>
</option>
<option name="vmOptions" />
</ExternalSystemSettings>
<ExternalSystemDebugServerProcess>true</ExternalSystemDebugServerProcess>
<ExternalSystemReattachDebugProcess>true</ExternalSystemReattachDebugProcess>
<EXTENSION ID="com.intellij.execution.ExternalSystemRunConfigurationJavaExtension">
<extension name="net.ashald.envfile">
<option name="IS_ENABLED" value="false" />
<option name="IS_SUBST" value="false" />
<option name="IS_PATH_MACRO_SUPPORTED" value="false" />
<option name="IS_IGNORE_MISSING_FILES" value="false" />
<option name="IS_ENABLE_EXPERIMENTAL_INTEGRATIONS" value="false" />
<ENTRIES>
<ENTRY IS_ENABLED="true" PARSER="runconfig" IS_EXECUTABLE="false" />
</ENTRIES>
</extension>
</EXTENSION>
<DebugAllEnabled>false</DebugAllEnabled>
<RunAsTest>false</RunAsTest>
<method v="2" />
</configuration>
</component>

View File

@ -0,0 +1,23 @@
package com.kt.distribution;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.kafka.annotation.EnableKafka;
/**
* Distribution Service Application
* Multi-channel distribution management service
*
* @author System Architect
* @since 2025-10-23
*/
@SpringBootApplication
@EnableKafka
@EnableFeignClients
public class DistributionApplication {
public static void main(String[] args) {
SpringApplication.run(DistributionApplication.class, args);
}
}

View File

@ -0,0 +1,86 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.DistributionRequest;
import io.github.resilience4j.bulkhead.annotation.Bulkhead;
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.retry.annotation.Retry;
import lombok.extern.slf4j.Slf4j;
/**
* Abstract Channel Adapter
* Common logic with Resilience4j (CircuitBreaker, Retry, Bulkhead) applied
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
public abstract class AbstractChannelAdapter implements ChannelAdapter {
/**
* 채널로 배포 실행 (Resilience4j 적용)
*
* @param request DistributionRequest
* @return ChannelDistributionResult
*/
@Override
@CircuitBreaker(name = "channelApi", fallbackMethod = "fallback")
@Retry(name = "channelApi")
@Bulkhead(name = "channelApi")
public ChannelDistributionResult distribute(DistributionRequest request) {
long startTime = System.currentTimeMillis();
try {
log.info("Starting distribution to channel: {}, eventId: {}",
getChannelType(), request.getEventId());
// 실제 외부 API 호출 (구현체에서 구현)
ChannelDistributionResult result = executeDistribution(request);
result.setExecutionTimeMs(System.currentTimeMillis() - startTime);
log.info("Distribution completed successfully: channel={}, eventId={}, executionTime={}ms",
getChannelType(), request.getEventId(), result.getExecutionTimeMs());
return result;
} catch (Exception e) {
long executionTime = System.currentTimeMillis() - startTime;
log.error("Distribution failed: channel={}, eventId={}, error={}",
getChannelType(), request.getEventId(), e.getMessage(), e);
return ChannelDistributionResult.builder()
.channel(getChannelType())
.success(false)
.errorMessage(e.getMessage())
.executionTimeMs(executionTime)
.build();
}
}
/**
* 실제 외부 API 호출 로직 (구현체에서 구현)
*
* @param request DistributionRequest
* @return ChannelDistributionResult
*/
protected abstract ChannelDistributionResult executeDistribution(DistributionRequest request);
/**
* Fallback method (invoked when the Circuit Breaker is open)
*
* @param request DistributionRequest
* @param throwable Throwable
* @return ChannelDistributionResult
*/
protected ChannelDistributionResult fallback(DistributionRequest request, Throwable throwable) {
log.warn("Fallback triggered for channel: {}, eventId: {}, reason: {}",
getChannelType(), request.getEventId(), throwable.getMessage());
return ChannelDistributionResult.builder()
.channel(getChannelType())
.success(false)
.errorMessage("Circuit Breaker Open: " + throwable.getMessage())
.executionTimeMs(0)
.build();
}
}

View File

@ -0,0 +1,30 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
/**
* Channel Adapter Interface
* Interface for invoking each channel's distribution API
*
* @author System Architect
* @since 2025-10-23
*/
public interface ChannelAdapter {
/**
* 지원하는 채널 타입
*
* @return ChannelType
*/
ChannelType getChannelType();
/**
* 채널로 배포 실행
*
* @param request DistributionRequest
* @return ChannelDistributionResult
*/
ChannelDistributionResult distribute(DistributionRequest request);
}

View File

@ -0,0 +1,45 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.util.UUID;
/**
* GiniTV (지니TV) Adapter
* Calls the GiniTV ad registration API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class GiniTvAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.ginitv.url}")
private String apiUrl;
@Override
public ChannelType getChannelType() {
return ChannelType.GINITV;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling GiniTV API: url={}, eventId={}", apiUrl, request.getEventId());
// TODO: 실제 API 호출 (현재는 Mock)
String distributionId = "GTIV-" + UUID.randomUUID().toString();
return ChannelDistributionResult.builder()
.channel(ChannelType.GINITV)
.success(true)
.distributionId(distributionId)
.estimatedReach(10000) // estimated TV ad impressions
.build();
}
}

View File

@ -0,0 +1,45 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.util.UUID;
/**
* Instagram Adapter
* Calls the Instagram posting API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class InstagramAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.instagram.url}")
private String apiUrl;
@Override
public ChannelType getChannelType() {
return ChannelType.INSTAGRAM;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling Instagram API: url={}, eventId={}", apiUrl, request.getEventId());
// TODO: 실제 API 호출 (현재는 Mock)
String distributionId = "INSTA-" + UUID.randomUUID().toString();
return ChannelDistributionResult.builder()
.channel(ChannelType.INSTAGRAM)
.success(true)
.distributionId(distributionId)
.estimatedReach(3000) // 팔로워 기반 예상 노출
.build();
}
}

View File

@ -0,0 +1,45 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.util.UUID;
/**
* Kakao Channel Adapter
* Calls the Kakao Channel posting API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class KakaoAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.kakao.url}")
private String apiUrl;
@Override
public ChannelType getChannelType() {
return ChannelType.KAKAO;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling Kakao API: url={}, eventId={}", apiUrl, request.getEventId());
// TODO: 실제 API 호출 (현재는 Mock)
String distributionId = "KAKAO-" + UUID.randomUUID().toString();
return ChannelDistributionResult.builder()
.channel(ChannelType.KAKAO)
.success(true)
.distributionId(distributionId)
.estimatedReach(4000) // 채널 친구 기반
.build();
}
}

View File

@ -0,0 +1,45 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.util.UUID;
/**
* Naver Blog Adapter
* Calls the Naver Blog posting API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class NaverAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.naver.url}")
private String apiUrl;
@Override
public ChannelType getChannelType() {
return ChannelType.NAVER;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling Naver API: url={}, eventId={}", apiUrl, request.getEventId());
// TODO: 실제 API 호출 (현재는 Mock)
String distributionId = "NAVER-" + UUID.randomUUID().toString();
return ChannelDistributionResult.builder()
.channel(ChannelType.NAVER)
.success(true)
.distributionId(distributionId)
.estimatedReach(2000) // 블로그 방문자 기반
.build();
}
}

View File

@ -0,0 +1,45 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.util.UUID;
/**
* RingoBiz (링고비즈) Adapter
* Calls the RingoBiz ring-back tone update API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class RingoBizAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.ringobiz.url}")
private String apiUrl;
@Override
public ChannelType getChannelType() {
return ChannelType.RINGOBIZ;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling RingoBiz API: url={}, eventId={}", apiUrl, request.getEventId());
// TODO: 실제 API 호출 (현재는 Mock)
String distributionId = "RBIZ-" + UUID.randomUUID().toString();
return ChannelDistributionResult.builder()
.channel(ChannelType.RINGOBIZ)
.success(true)
.distributionId(distributionId)
.estimatedReach(1000) // 연결음 사용자
.build();
}
}

View File

@ -0,0 +1,72 @@
package com.kt.distribution.adapter;
import com.kt.distribution.dto.ChannelDistributionResult;
import com.kt.distribution.dto.ChannelType;
import com.kt.distribution.dto.DistributionRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
/**
* UriDongNeTV (우리동네TV) Adapter
* Calls the UriDongNeTV API
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Component
public class UriDongNeTvAdapter extends AbstractChannelAdapter {
@Value("${channel.apis.uridongnetv.url}")
private String apiUrl;
private final RestTemplate restTemplate = new RestTemplate();
@Override
public ChannelType getChannelType() {
return ChannelType.URIDONGNETV;
}
@Override
protected ChannelDistributionResult executeDistribution(DistributionRequest request) {
log.debug("Calling UriDongNeTV API: url={}, eventId={}", apiUrl, request.getEventId());
// 외부 API 호출 준비
Map<String, Object> payload = new HashMap<>();
payload.put("eventId", request.getEventId());
payload.put("title", request.getTitle());
payload.put("videoUrl", request.getImageUrl()); // 이미지를 영상으로 변환한 URL
payload.put("radius", getChannelSetting(request, "radius", "500m"));
payload.put("timeSlot", getChannelSetting(request, "timeSlot", "evening"));
// TODO: 실제 API 호출 (현재는 Mock)
// ResponseEntity<Map> response = restTemplate.postForEntity(apiUrl + "/distribute", payload, Map.class);
// Mock 응답
String distributionId = "UDTV-" + UUID.randomUUID().toString();
int estimatedReach = 5000;
return ChannelDistributionResult.builder()
.channel(ChannelType.URIDONGNETV)
.success(true)
.distributionId(distributionId)
.estimatedReach(estimatedReach)
.build();
}
private String getChannelSetting(DistributionRequest request, String key, String defaultValue) {
if (request.getChannelSettings() != null) {
Map<String, Object> settings = request.getChannelSettings().get(ChannelType.URIDONGNETV.name());
if (settings != null && settings.containsKey(key)) {
return settings.get(key).toString();
}
}
return defaultValue;
}
}

View File

@ -0,0 +1,46 @@
package com.kt.distribution.config;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
/**
* Kafka Configuration
* Kafka producer configuration
*
* @author System Architect
* @since 2025-10-23
*/
@Configuration
@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = true)
public class KafkaConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public ProducerFactory<String, Object> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
configProps.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}

View File

@ -0,0 +1,32 @@
package com.kt.distribution.config;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
/**
* Web Configuration
* CORS and other web-related settings
*
* @author System Architect
* @since 2025-10-24
*/
@Configuration
public class WebConfig implements WebMvcConfigurer {
/**
* CORS 설정
* - 모든 origin 허용 (개발 환경)
* - 모든 HTTP 메서드 허용
* - Credentials 허용
*/
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**")
.allowedOriginPatterns("*")
.allowedMethods("GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS")
.allowedHeaders("*")
.allowCredentials(true)
.maxAge(3600);
}
}

View File

@ -0,0 +1,71 @@
package com.kt.distribution.controller;
import com.kt.distribution.dto.DistributionRequest;
import com.kt.distribution.dto.DistributionResponse;
import com.kt.distribution.dto.DistributionStatusResponse;
import com.kt.distribution.service.DistributionService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
/**
* Distribution Controller
* POST /api/distribution/distribute - run a multi-channel distribution
* GET /api/distribution/{eventId}/status - look up distribution status
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@RestController
@RequestMapping("/api/distribution")
@RequiredArgsConstructor
public class DistributionController {
private final DistributionService distributionService;
/**
* 다중 채널 배포 실행
* UFR-DIST-010: 다중채널배포
*
* @param request DistributionRequest
* @return DistributionResponse
*/
@PostMapping("/distribute")
public ResponseEntity<DistributionResponse> distribute(@RequestBody DistributionRequest request) {
log.info("Received distribution request: eventId={}, channels={}",
request.getEventId(), request.getChannels());
DistributionResponse response = distributionService.distribute(request);
log.info("Distribution request processed: eventId={}, success={}, successCount={}, failureCount={}",
response.getEventId(), response.isSuccess(),
response.getSuccessCount(), response.getFailureCount());
return ResponseEntity.ok(response);
}
/**
* 배포 상태 조회
* UFR-DIST-020: 배포상태조회
*
* @param eventId 이벤트 ID
* @return DistributionStatusResponse
*/
@GetMapping("/{eventId}/status")
public ResponseEntity<DistributionStatusResponse> getDistributionStatus(@PathVariable String eventId) {
log.info("Received distribution status request: eventId={}", eventId);
DistributionStatusResponse response = distributionService.getDistributionStatus(eventId);
log.info("Distribution status retrieved: eventId={}, overallStatus={}",
eventId, response.getOverallStatus());
if ("NOT_FOUND".equals(response.getOverallStatus())) {
return ResponseEntity.notFound().build();
}
return ResponseEntity.ok(response);
}
}

View File

@ -0,0 +1,49 @@
package com.kt.distribution.dto;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* Per-channel distribution result
*
* @author System Architect
* @since 2025-10-23
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChannelDistributionResult {
/**
* 채널 타입
*/
private ChannelType channel;
/**
* 배포 성공 여부
*/
private boolean success;
/**
* Distribution ID (set on success)
*/
private String distributionId;
/**
* Estimated reach (set on success)
*/
private Integer estimatedReach;
/**
* Error message (set on failure)
*/
private String errorMessage;
/**
* 배포 소요 시간 (ms)
*/
private long executionTimeMs;
}

View File

@ -0,0 +1,100 @@
package com.kt.distribution.dto;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
import java.util.List;
/**
* Per-channel distribution status DTO
*
* Holds a channel's distribution progress and result information.
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChannelStatus {
/**
* 채널 타입
*/
private ChannelType channel;
/**
* 채널별 배포 상태
* - PENDING: 대기
* - IN_PROGRESS: 진행
* - COMPLETED: 완료
* - FAILED: 실패
*/
private String status;
/**
* Progress (0-100, used while the status is IN_PROGRESS)
*/
private Integer progress;
/**
* Per-channel distribution ID (UriDongNeTV, GiniTV, etc.)
*/
private String distributionId;
/**
* Estimated views (UriDongNeTV, GiniTV)
*/
private Integer estimatedViews;
/**
* 업데이트 완료 시각 (링고비즈)
*/
private LocalDateTime updateTimestamp;
/**
* 광고 ID (지니TV)
*/
private String adId;
/**
* 노출 스케줄 (지니TV)
*/
private List<String> impressionSchedule;
/**
* 게시물 URL (Instagram, Naver Blog)
*/
private String postUrl;
/**
* 게시물 ID (Instagram)
*/
private String postId;
/**
* 메시지 ID (Kakao Channel)
*/
private String messageId;
/**
* 완료 시각
*/
private LocalDateTime completedAt;
/**
* Error message (set on failure)
*/
private String errorMessage;
/**
* 재시도 횟수
*/
private Integer retries;
/**
* 마지막 재시도 시각
*/
private LocalDateTime lastRetryAt;
}

View File

@ -0,0 +1,26 @@
package com.kt.distribution.dto;
/**
* Distribution channel type
*
* @author System Architect
* @since 2025-10-23
*/
public enum ChannelType {
URIDONGNETV("우리동네TV"),
RINGOBIZ("링고비즈"),
GINITV("지니TV"),
INSTAGRAM("Instagram"),
NAVER("Naver Blog"),
KAKAO("Kakao Channel");
private final String displayName;
ChannelType(String displayName) {
this.displayName = displayName;
}
public String getDisplayName() {
return displayName;
}
}

View File

@ -0,0 +1,54 @@
package com.kt.distribution.dto;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
import java.util.Map;
/**
* Distribution request DTO
* POST /api/distribution/distribute
*
* @author System Architect
* @since 2025-10-23
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class DistributionRequest {
/**
* 이벤트 ID
*/
private String eventId;
/**
* 이벤트 제목
*/
private String title;
/**
* 이벤트 설명
*/
private String description;
/**
* 이미지 URL (CDN)
*/
private String imageUrl;
/**
* 배포할 채널 목록
*/
private List<ChannelType> channels;
/**
* Additional per-channel settings (optional)
* e.g. { "URIDONGNETV": { "radius": "1km", "timeSlot": "evening" } }
*/
private Map<String, Map<String, Object>> channelSettings;
}

View File

@ -0,0 +1,63 @@
package com.kt.distribution.dto;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
import java.util.List;
/**
* Distribution response DTO
* POST /api/distribution/distribute
*
* @author System Architect
* @since 2025-10-23
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class DistributionResponse {
/**
* 이벤트 ID
*/
private String eventId;
/**
* 배포 성공 여부 (모든 채널 또는 일부 채널 성공)
*/
private boolean success;
/**
* 채널별 배포 결과
*/
private List<ChannelDistributionResult> channelResults;
/**
* Number of channels that succeeded
*/
private int successCount;
/**
* Number of channels that failed
*/
private int failureCount;
/**
* 배포 완료 시각
*/
private LocalDateTime completedAt;
/**
* 전체 배포 소요 시간 (ms)
*/
private long totalExecutionTimeMs;
/**
* 메시지
*/
private String message;
}

View File

@ -0,0 +1,52 @@
package com.kt.distribution.dto;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
import java.util.List;
/**
* Distribution status lookup response DTO
*
* Holds an event's overall distribution status and the detailed status of each channel.
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class DistributionStatusResponse {
/**
* 이벤트 ID
*/
private String eventId;
/**
* 전체 배포 상태
* - PENDING: 대기
* - IN_PROGRESS: 진행
* - COMPLETED: 완료
* - PARTIAL_FAILURE: 부분 성공
* - FAILED: 실패
* - NOT_FOUND: 배포 이력 없음
*/
private String overallStatus;
/**
* 배포 시작 시각
*/
private LocalDateTime startedAt;
/**
* 배포 완료 시각
*/
private LocalDateTime completedAt;
/**
* 채널별 배포 상태 목록
*/
private List<ChannelStatus> channels;
}

View File

@ -0,0 +1,48 @@
package com.kt.distribution.event;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
import java.util.List;
/**
* Distribution Completed Event
* Event published to Kafka when a distribution completes
*
* @author System Architect
* @since 2025-10-23
*/
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class DistributionCompletedEvent {
/**
* 이벤트 ID
*/
private String eventId;
/**
* 배포 완료된 채널 목록
*/
private List<String> distributedChannels;
/**
* 배포 완료 시각
*/
private LocalDateTime completedAt;
/**
* Number of channels that succeeded
*/
private int successCount;
/**
* Number of channels that failed
*/
private int failureCount;
}

View File

@ -0,0 +1,65 @@
package com.kt.distribution.repository;
import com.kt.distribution.dto.DistributionStatusResponse;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Repository;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
/**
* Distribution status repository
*
* Manages distribution status in memory.
* In production, persisting it in Redis or a database is recommended.
*/
@Slf4j
@Repository
public class DistributionStatusRepository {
/**
* 이벤트 ID를 키로 배포 상태를 저장하는 메모리 저장소
*/
private final Map<String, DistributionStatusResponse> distributionStatuses = new ConcurrentHashMap<>();
/**
* 배포 상태 저장
*
* @param eventId 이벤트 ID
* @param status 배포 상태
*/
public void save(String eventId, DistributionStatusResponse status) {
log.debug("Saving distribution status: eventId={}, overallStatus={}", eventId, status.getOverallStatus());
distributionStatuses.put(eventId, status);
}
/**
* 배포 상태 조회
*
* @param eventId 이벤트 ID
* @return 배포 상태 (없으면 Optional.empty())
*/
public Optional<DistributionStatusResponse> findByEventId(String eventId) {
log.debug("Finding distribution status: eventId={}", eventId);
return Optional.ofNullable(distributionStatuses.get(eventId));
}
/**
* 배포 상태 삭제
*
* @param eventId 이벤트 ID
*/
public void delete(String eventId) {
log.debug("Deleting distribution status: eventId={}", eventId);
distributionStatuses.remove(eventId);
}
/**
* 모든 배포 상태 삭제 (테스트용)
*/
public void deleteAll() {
log.debug("Deleting all distribution statuses");
distributionStatuses.clear();
}
}

View File

@ -0,0 +1,261 @@
package com.kt.distribution.service;
import com.kt.distribution.adapter.ChannelAdapter;
import com.kt.distribution.dto.*;
import com.kt.distribution.event.DistributionCompletedEvent;
import com.kt.distribution.repository.DistributionStatusRepository;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.time.LocalDateTime;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
/**
* Distribution Service
* Parallel multi-channel distribution and Kafka event publishing
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Service
public class DistributionService {
private final List<ChannelAdapter> channelAdapters;
private final Optional<KafkaEventPublisher> kafkaEventPublisher;
private final DistributionStatusRepository statusRepository;
@Autowired
public DistributionService(List<ChannelAdapter> channelAdapters,
Optional<KafkaEventPublisher> kafkaEventPublisher,
DistributionStatusRepository statusRepository) {
this.channelAdapters = channelAdapters;
this.kafkaEventPublisher = kafkaEventPublisher;
this.statusRepository = statusRepository;
}
// ExecutorService for parallel execution (fixed thread pool for per-channel calls)
private final ExecutorService executorService = Executors.newFixedThreadPool(10);
/**
* 다중 채널 병렬 배포
*
* @param request DistributionRequest
* @return DistributionResponse
*/
public DistributionResponse distribute(DistributionRequest request) {
LocalDateTime startedAt = LocalDateTime.now();
long startTime = System.currentTimeMillis();
log.info("Starting multi-channel distribution: eventId={}, channels={}",
request.getEventId(), request.getChannels());
// 배포 시작 상태 저장 (IN_PROGRESS)
saveInProgressStatus(request.getEventId(), request.getChannels(), startedAt);
// 채널 어댑터 매핑 (타입별)
Map<String, ChannelAdapter> adapterMap = channelAdapters.stream()
.collect(Collectors.toMap(
adapter -> adapter.getChannelType().name(),
adapter -> adapter
));
// 병렬 배포 실행
List<CompletableFuture<ChannelDistributionResult>> futures = request.getChannels().stream()
.map(channelType -> {
ChannelAdapter adapter = adapterMap.get(channelType.name());
if (adapter == null) {
log.warn("No adapter found for channel: {}", channelType);
return CompletableFuture.completedFuture(
ChannelDistributionResult.builder()
.channel(channelType)
.success(false)
.errorMessage("Adapter not found")
.build()
);
}
// 비동기 실행
return CompletableFuture.supplyAsync(
() -> adapter.distribute(request),
executorService
);
})
.collect(Collectors.toList());
// 모든 배포 완료 대기
CompletableFuture<Void> allOf = CompletableFuture.allOf(
futures.toArray(new CompletableFuture[0])
);
allOf.join(); // 블로킹 대기 (최대 1분 목표)
// 결과 수집
List<ChannelDistributionResult> results = futures.stream()
.map(CompletableFuture::join)
.collect(Collectors.toList());
long totalExecutionTime = System.currentTimeMillis() - startTime;
LocalDateTime completedAt = LocalDateTime.now();
// 성공/실패 카운트
long successCount = results.stream().filter(ChannelDistributionResult::isSuccess).count();
long failureCount = results.size() - successCount;
log.info("Multi-channel distribution completed: eventId={}, successCount={}, failureCount={}, totalTime={}ms",
request.getEventId(), successCount, failureCount, totalExecutionTime);
// 배포 완료 상태 저장 (COMPLETED/PARTIAL_FAILURE/FAILED)
saveCompletedStatus(request.getEventId(), results, startedAt, completedAt, successCount, failureCount);
// Kafka Event 발행
publishDistributionCompletedEvent(request.getEventId(), results);
// 응답 생성
return DistributionResponse.builder()
.eventId(request.getEventId())
.success(successCount > 0) // 1개 이상 성공하면 성공으로 간주
.channelResults(results)
.successCount((int) successCount)
.failureCount((int) failureCount)
.completedAt(completedAt)
.totalExecutionTimeMs(totalExecutionTime)
.message(String.format("Distribution completed: %d succeeded, %d failed",
successCount, failureCount))
.build();
}
/**
* 배포 상태 조회
*
* @param eventId 이벤트 ID
* @return 배포 상태
*/
public DistributionStatusResponse getDistributionStatus(String eventId) {
return statusRepository.findByEventId(eventId)
.orElse(DistributionStatusResponse.builder()
.eventId(eventId)
.overallStatus("NOT_FOUND")
.channels(List.of())
.build());
}
/**
* 배포 시작 상태 저장 (IN_PROGRESS)
*
* @param eventId 이벤트 ID
* @param channels 배포 채널 목록
* @param startedAt 시작 시각
*/
private void saveInProgressStatus(String eventId, List<ChannelType> channels, LocalDateTime startedAt) {
List<ChannelStatus> channelStatuses = channels.stream()
.map(channelType -> ChannelStatus.builder()
.channel(channelType)
.status("PENDING")
.build())
.collect(Collectors.toList());
DistributionStatusResponse status = DistributionStatusResponse.builder()
.eventId(eventId)
.overallStatus("IN_PROGRESS")
.startedAt(startedAt)
.channels(channelStatuses)
.build();
statusRepository.save(eventId, status);
}
/**
* 배포 완료 상태 저장
*
* @param eventId 이벤트 ID
* @param results 배포 결과
* @param startedAt 시작 시각
* @param completedAt 완료 시각
* @param successCount 성공 개수
* @param failureCount 실패 개수
*/
private void saveCompletedStatus(String eventId, List<ChannelDistributionResult> results,
LocalDateTime startedAt, LocalDateTime completedAt,
long successCount, long failureCount) {
// 전체 상태 결정
String overallStatus;
if (successCount == 0) {
overallStatus = "FAILED";
} else if (failureCount == 0) {
overallStatus = "COMPLETED";
} else {
overallStatus = "PARTIAL_FAILURE";
}
// Convert each ChannelDistributionResult into a ChannelStatus
List<ChannelStatus> channelStatuses = results.stream()
.map(this::convertToChannelStatus)
.collect(Collectors.toList());
DistributionStatusResponse status = DistributionStatusResponse.builder()
.eventId(eventId)
.overallStatus(overallStatus)
.startedAt(startedAt)
.completedAt(completedAt)
.channels(channelStatuses)
.build();
statusRepository.save(eventId, status);
}
/**
* ChannelDistributionResult를 ChannelStatus로 변환
*
* @param result 배포 결과
* @return 채널 상태
*/
private ChannelStatus convertToChannelStatus(ChannelDistributionResult result) {
return ChannelStatus.builder()
.channel(result.getChannel())
.status(result.isSuccess() ? "COMPLETED" : "FAILED")
.distributionId(result.getDistributionId())
.estimatedViews(result.getEstimatedReach())
.completedAt(LocalDateTime.now())
.errorMessage(result.getErrorMessage())
.build();
}
/**
* DistributionCompleted 이벤트 발행
*
* @param eventId 이벤트 ID
* @param results 채널별 배포 결과
*/
private void publishDistributionCompletedEvent(String eventId, List<ChannelDistributionResult> results) {
if (kafkaEventPublisher.isEmpty()) {
log.warn("KafkaEventPublisher not available - skipping event publishing");
return;
}
List<String> distributedChannels = results.stream()
.filter(ChannelDistributionResult::isSuccess)
.map(result -> result.getChannel().name())
.collect(Collectors.toList());
long successCount = results.stream().filter(ChannelDistributionResult::isSuccess).count();
long failureCount = results.size() - successCount;
DistributionCompletedEvent event = DistributionCompletedEvent.builder()
.eventId(eventId)
.distributedChannels(distributedChannels)
.completedAt(LocalDateTime.now())
.successCount((int) successCount)
.failureCount((int) failureCount)
.build();
kafkaEventPublisher.get().publishDistributionCompleted(event);
}
}

View File

@ -0,0 +1,62 @@
package com.kt.distribution.service;
import com.kt.distribution.event.DistributionCompletedEvent;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import java.util.concurrent.CompletableFuture;
/**
* Kafka Event Publisher
* Publishes DistributionCompleted events to Kafka
*
* @author System Architect
* @since 2025-10-23
*/
@Slf4j
@Service
@ConditionalOnProperty(name = "spring.kafka.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class KafkaEventPublisher {
private final KafkaTemplate<String, Object> kafkaTemplate;
@Value("${kafka.topics.distribution-completed}")
private String distributionCompletedTopic;
/**
* 배포 완료 이벤트 발행
*
* @param event DistributionCompletedEvent
*/
public void publishDistributionCompleted(DistributionCompletedEvent event) {
try {
log.info("Publishing DistributionCompletedEvent: eventId={}, successCount={}, failureCount={}",
event.getEventId(), event.getSuccessCount(), event.getFailureCount());
CompletableFuture<SendResult<String, Object>> future =
kafkaTemplate.send(distributionCompletedTopic, event.getEventId(), event);
future.whenComplete((result, ex) -> {
if (ex == null) {
log.info("DistributionCompletedEvent published successfully: topic={}, partition={}, offset={}",
distributionCompletedTopic,
result.getRecordMetadata().partition(),
result.getRecordMetadata().offset());
} else {
log.error("Failed to publish DistributionCompletedEvent: eventId={}, error={}",
event.getEventId(), ex.getMessage(), ex);
}
});
} catch (Exception e) {
log.error("Error publishing DistributionCompletedEvent: eventId={}, error={}",
event.getEventId(), e.getMessage(), e);
}
}
}

View File

@ -0,0 +1,102 @@
server:
port: 8085
spring:
application:
name: distribution-service
# Disable auto-configuration (No database required)
autoconfigure:
exclude:
- org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
- org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
- org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration
- org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
- org.springframework.boot.actuate.autoconfigure.security.servlet.ManagementWebSecurityAutoConfiguration
kafka:
enabled: ${KAFKA_ENABLED:true}
bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:4.230.50.63:9092}
producer:
key-serializer: org.apache.kafka.common.serialization.StringSerializer
value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
spring.json.type.mapping: distributionCompleted:com.kt.distribution.event.DistributionCompletedEvent
consumer:
group-id: ${KAFKA_CONSUMER_GROUP:distribution-service}
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
properties:
spring.json.trusted.packages: '*'
# Kafka Topics
kafka:
topics:
distribution-completed: distribution-completed
# Resilience4j Configuration
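# - circuitbreaker: opens when at least 50% of the last 10 calls (after a minimum of 5)
#   fail or take longer than 5s; stays open for 30s, then permits 3 half-open trial calls
# - retry: up to 3 attempts for the listed network exceptions, 1s initial wait,
#   with an exponential backoff multiplier of 2 configured
# - bulkhead: at most 10 concurrent channel calls; excess calls fail immediately (0ms wait)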
resilience4j:
circuitbreaker:
instances:
channelApi:
failure-rate-threshold: 50
slow-call-rate-threshold: 50
slow-call-duration-threshold: 5000ms
wait-duration-in-open-state: 30s
permitted-number-of-calls-in-half-open-state: 3
sliding-window-type: COUNT_BASED
sliding-window-size: 10
minimum-number-of-calls: 5
retry:
instances:
channelApi:
max-attempts: 3
wait-duration: 1s
exponential-backoff-multiplier: 2
retry-exceptions:
- java.net.SocketTimeoutException
- java.net.ConnectException
- org.springframework.web.client.ResourceAccessException
bulkhead:
instances:
channelApi:
max-concurrent-calls: 10
max-wait-duration: 0ms
# External Channel APIs (Mock URLs)
channel:
apis:
uridongnetv:
url: ${URIDONGNETV_API_URL:http://localhost:9001/api/uridongnetv}
timeout: 10000
ringobiz:
url: ${RINGOBIZ_API_URL:http://localhost:9002/api/ringobiz}
timeout: 10000
ginitv:
url: ${GINITV_API_URL:http://localhost:9003/api/ginitv}
timeout: 10000
instagram:
url: ${INSTAGRAM_API_URL:http://localhost:9004/api/instagram}
timeout: 10000
naver:
url: ${NAVER_API_URL:http://localhost:9005/api/naver}
timeout: 10000
kakao:
url: ${KAKAO_API_URL:http://localhost:9006/api/kakao}
timeout: 10000
# Logging
logging:
file:
name: ${LOG_FILE:logs/distribution-service.log}
logback:
rollingpolicy:
max-file-size: 10MB
max-history: 7
total-size-cap: 100MB
level:
com.kt.distribution: DEBUG
org.springframework.kafka: INFO
io.github.resilience4j: DEBUG

View File

@ -0,0 +1,134 @@
[
{
"eventId": "evt-test-001",
"title": "봄맞이 삼겹살 50% 할인 이벤트",
"description": "3월 한정 특별 이벤트! 삼겹살 1인분 무료 증정",
"imageUrl": "https://cdn.example.com/event-image-001.jpg",
"channels": ["URIDONGNETV", "INSTAGRAM", "KAKAO", "NAVER"],
"channelSettings": {
"URIDONGNETV": {
"radius": "1km",
"timeSlot": "evening"
}
}
},
{
"eventId": "evt-test-002",
"title": "신규 고객 환영! 치킨 3,000원 할인",
"description": "처음 방문하시는 고객님께 특별 할인 쿠폰 제공. 기간 내 사용 가능",
"imageUrl": "https://cdn.example.com/event-image-002.jpg",
"channels": ["INSTAGRAM", "KAKAO", "NAVER"],
"channelSettings": {
"INSTAGRAM": {
"hashtags": ["치킨", "할인", "신규고객"]
}
}
},
{
"eventId": "evt-test-003",
"title": "주말 특가! 피자 1+1 이벤트",
"description": "토요일, 일요일 한정! 모든 피자 1+1 행사. 배달 주문 가능",
"imageUrl": "https://cdn.example.com/event-image-003.jpg",
"channels": ["URIDONGNETV", "KAKAO"],
"channelSettings": {
"URIDONGNETV": {
"radius": "2km",
"timeSlot": "lunch"
},
"KAKAO": {
"targetAge": "20-40",
"sendTime": "11:00"
}
}
},
{
"eventId": "evt-test-004",
"title": "여름 시즌 냉면 페스티벌",
"description": "시원한 냉면과 함께하는 여름! 전 메뉴 20% 할인",
"imageUrl": "https://cdn.example.com/event-image-004.jpg",
"channels": ["URIDONGNETV", "INSTAGRAM", "NAVER"],
"channelSettings": {
"URIDONGNETV": {
"radius": "500m",
"timeSlot": "afternoon"
}
}
},
{
"eventId": "evt-test-005",
"title": "리뷰 작성 시 음료 무료!",
"description": "네이버 리뷰 작성하고 아메리카노 1잔 무료로 받아가세요",
"imageUrl": "https://cdn.example.com/event-image-005.jpg",
"channels": ["NAVER", "INSTAGRAM"],
"channelSettings": {
"NAVER": {
"reviewRequired": true
}
}
},
{
"eventId": "evt-test-006",
"title": "생일 축하! 케이크 30% 할인",
"description": "생일 당일 방문 시 신분증 제시하면 케이크 30% 할인. 예약 필수",
"imageUrl": "https://cdn.example.com/event-image-006.jpg",
"channels": ["KAKAO", "INSTAGRAM"],
"channelSettings": {
"KAKAO": {
"reservationRequired": true,
"targetAge": "all"
}
}
},
{
"eventId": "evt-test-007",
"title": "점심 시간 특가! 런치 세트 8,000원",
"description": "평일 11:30~14:00 런치 세트 메뉴 8,000원. 커피 포함",
"imageUrl": "https://cdn.example.com/event-image-007.jpg",
"channels": ["URIDONGNETV", "NAVER"],
"channelSettings": {
"URIDONGNETV": {
"radius": "1.5km",
"timeSlot": "lunch"
}
}
},
{
"eventId": "evt-test-008",
"title": "가족 나들이 패키지 20% 할인",
"description": "4인 가족 세트 메뉴 20% 할인! 키즈 메뉴 포함",
"imageUrl": "https://cdn.example.com/event-image-008.jpg",
"channels": ["KAKAO", "INSTAGRAM", "NAVER"],
"channelSettings": {
"KAKAO": {
"targetAge": "30-50",
"sendTime": "10:00"
}
}
},
{
"eventId": "evt-test-009",
"title": "야간 할인! 저녁 9시 이후 전 메뉴 15% OFF",
"description": "저녁 9시 이후 방문 시 모든 메뉴 15% 할인. 포장 가능",
"imageUrl": "https://cdn.example.com/event-image-009.jpg",
"channels": ["URIDONGNETV", "INSTAGRAM"],
"channelSettings": {
"URIDONGNETV": {
"radius": "1km",
"timeSlot": "evening"
}
}
},
{
"eventId": "evt-test-010",
"title": "SNS 팔로우 이벤트! 디저트 무료",
"description": "인스타그램 팔로우 후 인증하면 디저트 1개 무료 제공",
"imageUrl": "https://cdn.example.com/event-image-010.jpg",
"channels": ["INSTAGRAM", "KAKAO"],
"channelSettings": {
"INSTAGRAM": {
"followRequired": true,
"hashtags": ["팔로우이벤트", "디저트무료", "맛집"]
}
}
}
]

View File

@ -0,0 +1,34 @@
#!/bin/bash
# Distribution Service API test script
echo "=== Distribution Service API Test ==="
echo ""
# 1. Health Check (to be added later)
# echo "1. Health Check..."
# curl -X GET http://localhost:8085/actuator/health
# echo ""
# 2. Multi-channel distribution test
echo "1. Testing Multi-Channel Distribution..."
echo ""
curl -X POST http://localhost:8085/api/distribution/distribute \
-H "Content-Type: application/json" \
-d '{
"eventId": "evt-test-001",
"title": "봄맞이 삼겹살 50% 할인 이벤트",
"description": "3월 한정 특별 이벤트! 삼겹살 1인분 무료 증정",
"imageUrl": "https://cdn.example.com/event-image.jpg",
"channels": ["URIDONGNETV", "INSTAGRAM", "KAKAO", "NAVER"],
"channelSettings": {
"URIDONGNETV": {
"radius": "1km",
"timeSlot": "evening"
}
}
}' | jq '.'
echo ""
echo "=== Test Completed ==="

View File

@ -0,0 +1,303 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Tripgen Service Runner Script
Reads execution profiles from {service-name}/.run/{service-name}.run.xml and runs services accordingly.
Usage:
python run-config.py <service-name>
Examples:
python run-config.py user-service
python run-config.py location-service
python run-config.py trip-service
python run-config.py ai-service
"""
import os
import sys
import subprocess
import xml.etree.ElementTree as ET
from pathlib import Path
import argparse
def get_project_root():
"""Find project root directory"""
current_dir = Path(__file__).parent.absolute()
while current_dir.parent != current_dir:
if (current_dir / 'gradlew').exists() or (current_dir / 'gradlew.bat').exists():
return current_dir
current_dir = current_dir.parent
# If gradlew not found, assume parent directory of develop as project root
return Path(__file__).parent.parent.absolute()
def parse_run_configurations(project_root, service_name=None):
"""Parse run configuration files from .run directories"""
configurations = {}
if service_name:
# Parse specific service configuration
run_config_path = project_root / service_name / '.run' / f'{service_name}.run.xml'
if run_config_path.exists():
config = parse_single_run_config(run_config_path, service_name)
if config:
configurations[service_name] = config
else:
print(f"[ERROR] Cannot find run configuration: {run_config_path}")
else:
# Find all service directories
service_dirs = ['user-service', 'location-service', 'trip-service', 'ai-service']
for service in service_dirs:
run_config_path = project_root / service / '.run' / f'{service}.run.xml'
if run_config_path.exists():
config = parse_single_run_config(run_config_path, service)
if config:
configurations[service] = config
return configurations
def parse_single_run_config(config_path, service_name):
"""Parse a single run configuration file"""
try:
tree = ET.parse(config_path)
root = tree.getroot()
# Find configuration element
config = root.find('.//configuration[@type="GradleRunConfiguration"]')
if config is None:
print(f"[WARNING] No Gradle configuration found in {config_path}")
return None
# Extract environment variables
env_vars = {}
env_option = config.find('.//option[@name="env"]')
if env_option is not None:
env_map = env_option.find('map')
if env_map is not None:
for entry in env_map.findall('entry'):
key = entry.get('key')
value = entry.get('value')
if key and value:
env_vars[key] = value
# Extract task names
task_names = []
task_names_option = config.find('.//option[@name="taskNames"]')
if task_names_option is not None:
task_list = task_names_option.find('list')
if task_list is not None:
for option in task_list.findall('option'):
value = option.get('value')
if value:
task_names.append(value)
if env_vars or task_names:
return {
'env_vars': env_vars,
'task_names': task_names,
'config_path': str(config_path)
}
return None
except ET.ParseError as e:
print(f"[ERROR] XML parsing error in {config_path}: {e}")
return None
except Exception as e:
print(f"[ERROR] Error reading {config_path}: {e}")
return None
def get_gradle_command(project_root):
"""Return appropriate Gradle command for OS"""
if os.name == 'nt': # Windows
gradle_bat = project_root / 'gradlew.bat'
if gradle_bat.exists():
return str(gradle_bat)
return 'gradle.bat'
else: # Unix-like (Linux, macOS)
gradle_sh = project_root / 'gradlew'
if gradle_sh.exists():
return str(gradle_sh)
return 'gradle'
def run_service(service_name, config, project_root):
"""Run service"""
print(f"[START] Starting {service_name} service...")
# Set environment variables
env = os.environ.copy()
for key, value in config['env_vars'].items():
env[key] = value
print(f" [ENV] {key}={value}")
# Prepare Gradle command
gradle_cmd = get_gradle_command(project_root)
# Execute tasks
for task_name in config['task_names']:
print(f"\n[RUN] Executing: {task_name}")
cmd = [gradle_cmd, task_name]
try:
# Execute from project root directory
process = subprocess.Popen(
cmd,
cwd=project_root,
env=env,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
bufsize=1,
encoding='utf-8',
errors='replace'
)
print(f"[CMD] Command: {' '.join(cmd)}")
print(f"[DIR] Working directory: {project_root}")
print("=" * 50)
# Real-time output
for line in process.stdout:
print(line.rstrip())
# Wait for process completion
process.wait()
if process.returncode == 0:
print(f"\n[SUCCESS] {task_name} execution completed")
else:
print(f"\n[FAILED] {task_name} execution failed (exit code: {process.returncode})")
return False
except KeyboardInterrupt:
print(f"\n[STOP] Interrupted by user")
process.terminate()
return False
except Exception as e:
print(f"\n[ERROR] Execution error: {e}")
return False
return True
def list_available_services(configurations):
"""List available services"""
print("[LIST] Available services:")
print("=" * 40)
for service_name, config in configurations.items():
if config['task_names']:
print(f" [SERVICE] {service_name}")
if 'config_path' in config:
print(f" +-- Config: {config['config_path']}")
for task in config['task_names']:
print(f" +-- Task: {task}")
print(f" +-- {len(config['env_vars'])} environment variables")
print()
def main():
"""Main function"""
parser = argparse.ArgumentParser(
description='Tripgen Service Runner Script',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
python run-config.py user-service
python run-config.py location-service
python run-config.py trip-service
python run-config.py ai-service
python run-config.py --list
"""
)
parser.add_argument(
'service_name',
nargs='?',
help='Service name to run'
)
parser.add_argument(
'--list', '-l',
action='store_true',
help='List available services'
)
args = parser.parse_args()
# Find project root
project_root = get_project_root()
print(f"[INFO] Project root: {project_root}")
# Parse run configurations
print("[INFO] Reading run configuration files...")
configurations = parse_run_configurations(project_root)
if not configurations:
print("[ERROR] No execution configurations found")
return 1
print(f"[INFO] Found {len(configurations)} execution configurations")
# List services request
if args.list:
list_available_services(configurations)
return 0
# If service name not provided
if not args.service_name:
print("\n[ERROR] Please provide service name")
list_available_services(configurations)
print("Usage: python run-config.py <service-name>")
return 1
# Find service
service_name = args.service_name
# Try to parse specific service configuration if not found
if service_name not in configurations:
print(f"[INFO] Trying to find configuration for '{service_name}'...")
configurations = parse_run_configurations(project_root, service_name)
if service_name not in configurations:
print(f"[ERROR] Cannot find '{service_name}' service")
list_available_services(configurations)
return 1
config = configurations[service_name]
if not config['task_names']:
print(f"[ERROR] No executable tasks found for '{service_name}' service")
return 1
# Execute service
print(f"\n[TARGET] Starting '{service_name}' service execution")
print("=" * 50)
success = run_service(service_name, config, project_root)
if success:
print(f"\n[COMPLETE] '{service_name}' service started successfully!")
return 0
else:
print(f"\n[FAILED] Failed to start '{service_name}' service")
return 1
if __name__ == '__main__':
try:
exit_code = main()
sys.exit(exit_code)
except KeyboardInterrupt:
print("\n[STOP] Interrupted by user")
sys.exit(1)
except Exception as e:
print(f"\n[ERROR] Unexpected error occurred: {e}")
sys.exit(1)