A calibration-coefficient starter based on the Cuckoo Search algorithm


  Cuckoo Search (CS) is a swarm-intelligence optimization algorithm inspired by the brood parasitism of cuckoos and by the probability that a host bird discovers a foreign egg. It was proposed by Xin-She Yang and Suash Deb in 2009. The algorithm incorporates Lévy flights, a random-walk pattern observed in nature in the foraging behaviour of some animals, which makes it very effective at exploring large search spaces. After porting a MATLAB calibration-coefficient routine to Java, I wanted to package it as a more general, reusable module, which is what this article describes.
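For readers who have not met the term before: a Lévy-flight step is usually generated with Mantegna's method, combining two Gaussian draws so that most steps are small while an occasional step is very large, which is what lets the walk cover a wide range efficiently. The snippet below is only a generic, self-contained illustration of that idea (the scale factor SIGMA_U is precomputed for the common exponent beta = 1.5); it is not the update rule used by the optimizer in section 3.5, which applies its own nest-abandonment step.

import java.util.Random;

public class LevyFlightDemo {

    private static final double BETA = 1.5;
    // sigma_u from Mantegna's formula, precomputed for BETA = 1.5
    private static final double SIGMA_U = 0.6966;

    // One Lévy-distributed step: u / |v|^(1/beta) with u ~ N(0, sigma_u^2), v ~ N(0, 1)
    public static double levyStep(Random rand) {
        double u = rand.nextGaussian() * SIGMA_U;
        double v = rand.nextGaussian();
        return u / Math.pow(Math.abs(v), 1.0 / BETA);
    }

    public static void main(String[] args) {
        Random rand = new Random();
        double x = 0.0; // position of a single one-dimensional "nest"
        for (int i = 0; i < 5; i++) {
            x += 0.01 * levyStep(rand); // small scale factor keeps the walk bounded
            System.out.printf("step %d: x = %.4f%n", i, x);
        }
    }
}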

1. Overview

CJH HUP Optimization Starter is a Spring Boot based optimization toolkit that reads data from Excel or JSON and computes an optimal solution with a multi-objective optimization algorithm. It is intended for scenarios such as hydrological analysis and discharge estimation.

2. Features

  • Multiple data sources: Excel and JSON formats are supported.

  • Multi-objective optimization: implemented with a cuckoo-search population algorithm.

  • Flexible configuration: algorithm parameters are adjusted through the configuration file.

  • Performance: the tuned implementation runs quickly and handles large data sets.

  • Extensibility: custom data sources and optimizers can be plugged in.


3. Writing the code

3.1 Setting up the project structure
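The original post does not spell the layout out, so the tree below is an assumed arrangement inferred from the classes used in the rest of this article; the package names are illustrative only.

cjh-hup-starter/
├── pom.xml
└── src/main/
    ├── java/com/cjh/hup/
    │   ├── config/
    │   │   ├── HupAutoConfiguration.java
    │   │   └── HupProperties.java
    │   ├── datasource/
    │   │   ├── HupDataSource.java
    │   │   ├── ExcelDataSource.java
    │   │   └── JsonDataSource.java
    │   ├── optimizer/
    │   │   ├── HupOptimizer.java
    │   │   ├── FobjData.java
    │   │   ├── InitializeData.java
    │   │   └── OptimizationResult.java
    │   └── service/
    │       └── HupService.java
    └── resources/
        └── META-INF/spring.factories   (auto-configuration registration, see 3.3)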

3.2 The default configuration file

cjh:
  hup:
    data-source-type: excel   # excel and json are currently supported; the default is excel
    num-pop: 500              # population size
    num-obj: 2                # number of objective functions: random uncertainty and systematic error
    num-opt: 5                # number of decision variables
    low-limit: -10            # lower bound of the coefficients
    up-limit: 20              # upper bound of the coefficients
    h1: 173.9                 # elevation of the first acoustic path
    h2: 175.4                 # elevation of the second acoustic path
    h3: 177.1                 # elevation of the third acoustic path
    h4: 178.6                 # elevation of the fourth acoustic path
    w: 0.6                    # river-bottom velocity coefficient
    max-gen: 1000             # maximum number of generations
    q_obs_col: 5              # column holding the observed discharge
    hs_col: 6                 # column holding Hs
    v-cols: [8, 9, 10, 11]    # columns holding the layer velocities
    lg-hs:                    # stage-area relation
      164.95: 0
      164.96: 0.01
      164.97: 0.02
      164.98: 0.03

3.3 The auto-configuration class

@Configuration
@EnableConfigurationProperties(HupProperties.class)
public class HupAutoConfiguration {

    public HupAutoConfiguration() {
        System.out.println("HupAutoConfiguration has been instantiated.");
    }

    @Bean
    @ConditionalOnMissingBean
    public HupOptimizer hupOptimizer(HupProperties properties) {
        System.out.println("HupOptimizer has been instantiated.");
        return new HupOptimizer(properties);
    }

    @Bean
    @ConditionalOnMissingBean
    public HupDataSource hupDataSource(HupProperties properties) {
        if ("json".equalsIgnoreCase(properties.getDataSourceType())) {
            System.out.println("HupAutoConfiguration has been instantiated and uses type json");
            return new JsonDataSource();
        } else {
            System.out.println("HupAutoConfiguration has been instantiated and uses type excel");
            return new ExcelDataSource();
        }
    }

    @Bean
    @ConditionalOnMissingBean
    public HupService hupService(HupDataSource dataSource, HupOptimizer optimizer) {
        System.out.println("HupService has been instantiated.");
        return new HupService(dataSource, optimizer);
    }
}
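The auto-configuration relies on two pieces that the original post does not list. First, the HupProperties class that backs the cjh.hup.* keys; the sketch below is inferred from the configuration file in 3.2 and from the getters used later in this article (the field names, the map key type, and the package are assumptions, and the accessors are omitted for brevity):

import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "cjh.hup")
public class HupProperties {
    private String dataSourceType = "excel"; // excel or json
    private int numPop = 500;                // population size
    private int numObj = 2;                  // number of objectives
    private int numOpt = 5;                  // number of decision variables
    private double lowLimit = -10;           // lower bound of the coefficients
    private double upLimit = 20;             // upper bound of the coefficients
    private double h1, h2, h3, h4;           // acoustic-path elevations of the four layers
    private double w = 0.6;                  // river-bottom velocity coefficient
    private int maxGen = 1000;               // maximum number of generations
    private int q_obs_col;                   // Excel column holding observed discharge
    private int hs_col;                      // Excel column holding the stage Hs
    private int[] vCols;                     // Excel columns holding the layer velocities
    private Map<String, Double> lgHs = new LinkedHashMap<>(); // stage-area relation (key type assumed)
    // getters and setters matching the names used in this article (getNumPop(), getQ_obs_col(), ...) omitted
}

Second, the starter has to register the auto-configuration so that consuming projects pick it up. On Spring Boot 2.x that is a META-INF/spring.factories entry such as the one below (the fully qualified class name is an assumption); on Spring Boot 3.x the equivalent is a line in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.cjh.hup.config.HupAutoConfiguration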

3.4 Data sources

First, define the common parent data-source interface:

public interface HupDataSource {

    Map<String, Object> loadData(String filePath) throws RuntimeException;

    default void validateData(Map<String, Object> data) {
        if (!data.containsKey("Q_obs") || !data.containsKey("v")) {
            throw new RuntimeException("Invalid data format");
        }
    }
}

JSON data source (note that when JSON is used, the loadData argument is the JSON string itself rather than a file path):

@Component
@ConditionalOnProperty(name = "cjh.hup.data-source-type", havingValue = "json")
public class JsonDataSource implements HupDataSource {

    @Override
    public Map<String, Object> loadData(String jsonData) {
        try {
            FlowData flowData = JSONUtil.toBean(jsonData, FlowData.class, true);
            Map<String, Object> map = new HashMap<>();
            // observed discharge Q_obs
            map.put("Q_obs", flowData.getQ_obs());
            map.put("Q_obs_len", flowData.getQ_obs().length);
            // layer velocities v: the double[][] value is stored directly, since a Java Object can hold any type
            map.put("v", flowData.getV());
            map.put("v_len", flowData.getV().length);
            // stage Hs
            map.put("Hs", flowData.getHs());
            map.put("Hs_len", flowData.getHs().length);
            validateData(map);
            return map;
        } catch (Exception e) {
            throw new RuntimeException("JSON data loading failed", e);
        }
    }
}
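FlowData, the bean that the JSON payload is bound to, is not shown in the original post; a sketch that matches the fields accessed above (the field names must match the JSON keys) could look like this:

public class FlowData {
    private double[] Q_obs;  // observed discharge series
    private double[][] v;    // layer velocities, one row per acoustic path
    private double[] Hs;     // stage series

    public double[] getQ_obs() { return Q_obs; }
    public void setQ_obs(double[] Q_obs) { this.Q_obs = Q_obs; }
    public double[][] getV() { return v; }
    public void setV(double[][] v) { this.v = v; }
    public double[] getHs() { return Hs; }
    public void setHs(double[] Hs) { this.Hs = Hs; }
}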
Excel data source:

@Component
@ConditionalOnProperty(name = "cjh.hup.data-source-type", havingValue = "excel")
public class ExcelDataSource implements HupDataSource {

    @Resource
    HupProperties properties;

    @Override
    public Map<String, Object> loadData(String filePath) {
        try (ExcelReader reader = ExcelUtil.getReader(new File(filePath))) {
            List<List<Object>> data = reader.read();
            Map<String, Object> result = new HashMap<>();
            double[] qObs = extractColumn(data, properties.getQ_obs_col());
            result.put("Q_obs", qObs);
            result.put("Q_obs_len", qObs.length);
            double[][] vData = extractColumns(data, properties.getvCols());
            result.put("v", vData);
            result.put("v_len", vData.length);
            double[] hs = extractColumn(data, properties.getHs_col());
            result.put("Hs", hs);
            result.put("Hs_len", hs.length); // fixed: store the length, not the array itself
            validateData(result);
            return result;
        } catch (Exception e) {
            throw new RuntimeException("Excel data loading failed", e);
        }
    }

    // Extract one column as a double[]
    private static double[] extractColumn(List<List<Object>> data, int col) {
        double[] columnData = new double[data.size() - 1]; // skip the header row
        for (int i = 1; i < data.size(); i++) {            // start reading from the second row
            List<Object> row = data.get(i);
            if (row.get(col) instanceof Number) {
                columnData[i - 1] = ((Number) row.get(col)).doubleValue();
            } else {
                columnData[i - 1] = Double.NaN;            // non-numeric cells become NaN
            }
        }
        return columnData;
    }

    // Extract several columns (indices) as a double[][]
    public static double[][] extractColumns(List<List<Object>> data, int[] indices) {
        double[][] columnData = new double[indices.length][data.size() - 1]; // skip the header row
        for (int i = 1; i < data.size(); i++) {                              // start reading from the second row
            List<Object> row = data.get(i);
            for (int j = 0; j < indices.length; j++) {
                int colIndex = indices[j];
                if (row.get(colIndex) instanceof Number) {
                    columnData[j][i - 1] = ((Number) row.get(colIndex)).doubleValue();
                } else {
                    columnData[j][i - 1] = Double.NaN;                       // non-numeric cells become NaN
                }
            }
        }
        return columnData;
    }
}

3.5 The optimization algorithm

public class HupOptimizer {

    private final HupProperties properties;

    // per-thread random generators (avoids contention during parallel initialization)
    private final ThreadLocal<Random> random = ThreadLocal.withInitial(Random::new);

    public HupOptimizer(HupProperties properties) {
        this.properties = properties;
    }

    public OptimizationResult optimize(Map<String, Object> inputData) {
        long t1 = System.currentTimeMillis();
        // main body of the optimization
        InitializeData initializeData = initializePopulation(inputData);
        double[][] nest = initializeData.getNest();
        double[] b = initializeData.getB();
        double[][] Q_shicha = initializeData.getQ_shicha();
        List<Object> Q_fenceng = initializeData.getQ_fenceng();
        int NUM_POP = properties.getNumPop();
        int NUM_OPT = properties.getNumOpt();
        int NUM_OBJ = properties.getNumObj();
        int q_len = (int) inputData.get("Q_obs_len");

        // new array with NUM_POP rows and (objectives + variables + 2) columns
        double[][] expandedArray = new double[NUM_POP][NUM_OBJ + NUM_OPT + 2];
        // copy the original population into it
        for (int t = 0; t < NUM_POP; t++) {
            for (int j = 0; j < NUM_OBJ + NUM_OPT; j++) {
                expandedArray[t][j] = nest[t][j];
            }
        }
        // initialize the two added columns (rank and crowding distance) to 0
        for (int i = 0; i < NUM_POP; i++) {
            expandedArray[i][7] = 0;
            expandedArray[i][8] = 0;
        }
        expandedArray = nonDominationSort(expandedArray, NUM_OBJ, NUM_OPT);

        // iterative optimization
        System.out.println("Computing...");
        double[][] optimizedNest = optimize(expandedArray, inputData);

        double[] temp = new double[NUM_OPT];
        for (int jj = 0; jj < NUM_POP; jj++) {
            for (int m = 0; m < NUM_OPT; m++) {
                temp[m] = optimizedNest[jj][m];
            }
            FobjData fobj = fobj(temp, inputData);
            double[] f = fobj.getF();
            double bx = fobj.getB();
            b[jj] = bx;
            double[] Q_shichatemp = fobj.getQ_shicha();
            double[][] Q_fencengTemp = fobj.getQ_fenceng();
            for (int t = 0; t < NUM_OBJ; t++) {
                optimizedNest[jj][NUM_OPT + t] = f[0];
            }
            Q_shicha[jj] = Q_shichatemp;
            Q_fenceng.add(jj, Q_fencengTemp);
        }

        // extract the decision-variable part
        double[][] nest_Pareto = new double[nest.length][NUM_OPT];
        for (int i = 0; i < nest.length; i++) {
            System.arraycopy(nest[i], 0, nest_Pareto[i], 0, NUM_OPT);
        }
        // extract the objective-value part
        double[][] f_Pareto = new double[nest.length][NUM_OBJ];
        for (int i = 0; i < nest.length; i++) {
            System.arraycopy(nest[i], NUM_OPT, f_Pareto[i], 0, NUM_OBJ);
        }
        // build the Pareto matrix
        double[][] Pareto = new double[nest.length][NUM_OPT + 1 + NUM_OBJ + q_len];
        for (int i = 0; i < nest.length; i++) {
            System.arraycopy(nest_Pareto[i], 0, Pareto[i], 0, NUM_OPT);
            Pareto[i][NUM_OPT] = b[i];
            System.arraycopy(Q_shicha[i], 0, Pareto[i], NUM_OPT + 1, q_len);
            System.arraycopy(f_Pareto[i], 0, Pareto[i], NUM_OPT + q_len + 1, NUM_OBJ);
        }

        // find the row with the smallest random uncertainty and the corresponding result
        double minF1 = Double.MAX_VALUE;
        int Y = -1;
        for (int i = 0; i < f_Pareto.length; i++) {
            if (f_Pareto[i][0] < minF1) {
                minF1 = f_Pareto[i][0];
                Y = i;
            }
        }
        double[] k_opt = Arrays.copyOf(nest_Pareto[Y], nest_Pareto[Y].length);
        double b_opt = b[Y];
        double f1_opt = f_Pareto[Y][0];
        double f2_opt = f_Pareto[Y][1];

        // print the result
        System.out.printf("k= %.2f %.2f %.2f %.2f %.2f, b= %.2f, random uncertainty= %.4f, systematic error= %.4f\n",
                k_opt[0], k_opt[1], k_opt[2], k_opt[3], k_opt[4], b_opt, f1_opt, f2_opt);

        OptimizationResult result = new OptimizationResult();
        result.setB(b_opt);
        result.setK(k_opt);
        result.setUncertainty(f1_opt);
        result.setSystemError(f2_opt);
        long t2 = System.currentTimeMillis();
        System.out.println("Elapsed: " + (t2 - t1) / 1000 + "s");
        return result;
    }

    // main optimization loop
    private double[][] optimize(double[][] nest, Map<String, Object> simuPara) {
        int gen = 0;
        int MAX_GEN = properties.getMaxGen();
        int NUM_POP = properties.getNumPop();
        int NUM_OPT = properties.getNumOpt();
        int NUM_OBJ = properties.getNumObj();
        while (gen < MAX_GEN) {
            gen++;
            double pa = 0.5 - gen * (0.5 - 0.05) / MAX_GEN;
            double[][] newNest = emptyNests(nest, pa, simuPara);
            double[][] Tempnest = verticalConcatenate(nest, newNest);
            Tempnest = nonDominationSort(Tempnest, NUM_OBJ, NUM_OPT);
            nest = replace(Tempnest, NUM_OBJ, NUM_OPT, NUM_POP);
        }
        return nest;
    }

    private InitializeData initializePopulation(Map<String, Object> data) {
        InitializeData resultData = new InitializeData();
        int NUM_POP = properties.getNumPop();
        int NUM_OPT = properties.getNumOpt();
        int NUM_OBJ = properties.getNumObj();
        double LOW_LIMIT = properties.getLowLimit();
        double UP_LIMIT = properties.getUpLimit();
        double[][] nest = new double[NUM_POP][NUM_OPT + NUM_OBJ];
        double[][] Q_shicha = new double[NUM_POP][(int) data.get("Q_obs_len")];
        List<Object> Q_fenceng = new ArrayList<>();
        double[] b = new double[NUM_POP];
        // initialize the population in parallel
        IntStream.range(0, NUM_POP).parallel().forEach(i -> {
            double[] KK = new double[NUM_OPT];
            Random rand = random.get();
            for (int j = 0; j < NUM_OPT; j++) {
                KK[j] = LOW_LIMIT + rand.nextDouble() * (UP_LIMIT - LOW_LIMIT);
            }
            System.arraycopy(KK, 0, nest[i], 0, KK.length);
            FobjData fobjData = fobj(KK, data);
            System.arraycopy(fobjData.getF(), 0, nest[i], NUM_OPT, NUM_OBJ);
        });
        resultData.setNest(nest);
        resultData.setQ_shicha(Q_shicha);
        resultData.setQ_fenceng(Q_fenceng);
        resultData.setB(b);
        return resultData;
    }

    private double[][] emptyNests(double[][] oldNest, double pa, Map<String, Object> simuPara) {
        int m = properties.getNumObj();  // number of objectives
        int nd = properties.getNumOpt(); // number of decision variables
        // deep-copy oldNest into newNest
        double[][] newNest = new double[oldNest.length][oldNest[0].length];
        for (int i = 0; i < oldNest.length; i++) {
            System.arraycopy(oldNest[i], 0, newNest[i], 0, oldNest[i].length);
        }
        // extract the decision-variable part of oldNest
        double[][] nest = new double[oldNest.length][nd];
        for (int i = 0; i < oldNest.length; i++) {
            System.arraycopy(oldNest[i], 0, nest[i], 0, nd);
        }
        int n = nest.length;
        Random rand = new Random();
        // boolean matrix K: true = the dimension keeps its differential step
        boolean[][] K = new boolean[n][nd];
        for (int i = 0; i < K.length; i++) {
            for (int j = 0; j < K[i].length; j++) {
                K[i][j] = rand.nextDouble() > pa;
            }
        }
        // rows where every entry of K is true
        List<Integer> x = new ArrayList<>();
        for (int i = 0; i < K.length; i++) {
            int count = 0;
            for (int j = 0; j < K[i].length; j++) {
                if (K[i][j]) {
                    count++;
                }
            }
            if (count == nd) {
                x.add(i);
            }
        }
        // clear the last column of those rows
        for (int row : x) {
            K[row][K[row].length - 1] = false;
        }
        // L is the complement of K
        boolean[][] L = new boolean[K.length][K[0].length];
        for (int i = 0; i < K.length; i++) {
            for (int j = 0; j < K[i].length; j++) {
                L[i][j] = !K[i][j];
            }
        }
        // count the true entries of L (i.e. the zeros of K) in each row
        int[] count = new int[K.length];
        for (int i = 0; i < K.length; i++) {
            int rowCount = 0;
            for (int j = 0; j < K[i].length; j++) {
                if (L[i][j]) {
                    rowCount++;
                }
            }
            count[i] = rowCount;
        }
        // two random permutations of the row indices
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            indices.add(i);
        }
        Collections.shuffle(indices);
        int[] rand1 = indices.stream().mapToInt(Integer::intValue).toArray();
        Collections.shuffle(indices);
        int[] rand2 = indices.stream().mapToInt(Integer::intValue).toArray();
        // random scaling factor
        double randNum = rand.nextDouble();
        // difference vectors between two randomly chosen nests
        double[][] delta = new double[n][nd];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < nd; j++) {
                delta[i][j] = randNum * (nest[rand1[i]][j] - nest[rand2[i]][j]);
            }
        }
        // stepsize1: keep delta only where K is true
        double[][] stepsize1 = new double[n][nd];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < nd; j++) {
                stepsize1[i][j] = K[i][j] ? delta[i][j] : 0.0;
            }
        }
        // delta2: per-row mean of stepsize1 over the abandoned dimensions
        double[] delta2 = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < nd; j++) {
                sum += stepsize1[i][j];
            }
            delta2[i] = sum / count[i];
        }
        // stepsize2: spread delta2 over the dimensions marked by L
        double[][] stepsize2 = new double[nest.length][nest[0].length];
        for (int i = 0; i < nest.length; i++) {
            for (int j = 0; j < nest[0].length; j++) {
                stepsize2[i][j] = delta2[i] * (L[i][j] ? 1.0 : 0.0);
            }
        }
        // final step size
        double[][] stepsize = new double[nest.length][nest[0].length];
        for (int i = 0; i < nest.length; i++) {
            for (int j = 0; j < nest[0].length; j++) {
                stepsize[i][j] = stepsize1[i][j] - stepsize2[i][j];
            }
        }
        // candidate nests: current position plus step
        double[][] new_nest = new double[nest.length][nest[0].length];
        for (int i = 0; i < nest.length; i++) {
            for (int j = 0; j < nest[0].length; j++) {
                new_nest[i][j] = nest[i][j] + stepsize[i][j];
            }
        }
        // evaluate the two objective values with fobj
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < nd; j++) {
                newNest[i][j] = nest[i][j] + stepsize1[i][j];
            }
            // evaluate the new nest and store the objective values
            double[] f = fobj(newNest[i], simuPara).getF();
            System.arraycopy(f, 0, newNest[i], nd, m);
        }
        return newNest;
    }

    private FobjData fobj(double[] KK, Map<String, Object> data) {
        FobjData fobjData = new FobjData();
        // the two objective values
        double[] f = new double[2];
        // unpack parameters
        double[][] lgHS = DataUtil.mapToArray(properties.getLgHs());
        double[] Q_obs = (double[]) data.get("Q_obs");
        double[][] v = (double[][]) data.get("v");
        double[] Hs = (double[]) data.get("Hs");
        double H1 = properties.getH1();
        double H2 = properties.getH2();
        double H3 = properties.getH3();
        double H4 = properties.getH4();
        double w = properties.getW();
        int n = (int) data.get("Q_obs_len");
        double lowlimit = properties.getLowLimit();
        double uplimit = properties.getUpLimit();
        double[] k = KK; // in the MATLAB version kk is a 1 x NUM_OPT row vector that is transposed into a column vector

        // reconstruct discharge
        double[] v0 = new double[v[0].length];
        for (int i = 0; i < v[0].length; i++) {
            v0[i] = w * v[0][i]; // bottom velocity v0
        }
        double[] rows0 = exColsWithNum(lgHS, 0);
        double[] rows1 = exColsWithNum(lgHS, 1);
        double[] S0 = chazhi(rows0, rows1, new double[]{H1});
        double[] S1 = subtract(chazhi(rows0, rows1, new double[]{H2}), S0);
        double[] S2 = subtract(chazhi(rows0, rows1, new double[]{H3}), chazhi(rows0, rows1, new double[]{H2}));
        double[] S3 = subtract(chazhi(rows0, rows1, new double[]{H4}), chazhi(rows0, rows1, new double[]{H3}));
        double[] S4 = subtract(chazhi(rows0, rows1, Hs), chazhi(rows0, rows1, new double[]{H4}));
        double[] Q0 = new double[v0.length];
        double[] Q1 = new double[v[0].length];
        double[] Q2 = new double[v[1].length];
        double[] Q3 = new double[v[2].length];
        double[] Q4 = new double[v[3].length];
        for (int i = 0; i < v0.length; i++) {
            Q0[i] = (v0[i] + v[0][i]) * 0.5 * S0[0];
        }
        for (int i = 0; i < Q1.length; i++) {
            Q1[i] = (v[0][i] + v[1][i]) * 0.5 * S1[0];
        }
        for (int i = 0; i < Q2.length; i++) {
            Q2[i] = (v[1][i] + v[2][i]) * 0.5 * S2[0];
        }
        for (int i = 0; i < Q3.length; i++) {
            Q3[i] = (v[2][i] + v[3][i]) * 0.5 * S3[0];
        }
        for (int i = 0; i < Q4.length; i++) {
            Q4[i] = v[3][i] * S4[i];
        }
        // combine layer discharges
        double[][] Q_fenceng = {Q0, Q1, Q2, Q3, Q4};
        double[] Q_shicha = new double[Q_fenceng[0].length];                  // column sums (zero-initialized)
        double[][] tempx = new double[Q_fenceng.length][Q_fenceng[0].length]; // weighted layer discharges
        // element-wise product k .* Q_fenceng, written as a single flat loop
        int rows = Q_fenceng.length;
        int cols = Q_fenceng[0].length;
        for (int i = 0; i < rows * cols; i++) {
            int j = i / cols;
            int kIndex = i % cols;
            tempx[j][kIndex] = k[j] * Q_fenceng[j][kIndex];
        }
        // sum each column
        for (int i = 0; i < Q_shicha.length; i++) {
            Q_shicha[i] = Arrays.stream(exColsWithNum(tempx, i)).sum();
        }
        // mean difference between Q_obs and Q_shicha, used to correct the reconstructed discharge
        double sumDifference = 0.0;
        for (int i = 0; i < Q_obs.length; i++) {
            sumDifference += Q_obs[i] - Q_shicha[i];
        }
        double b = sumDifference / Q_obs.length;
        // apply the correction
        for (int i = 0; i < Q_shicha.length; i++) {
            Q_shicha[i] += b;
        }
        // penalty for coefficients outside [lowlimit, uplimit]
        double penalty = 0.0;
        int s1 = 0;
        int s2 = 0;
        for (double ki : k) {
            if (ki < lowlimit) {
                s1 += 1;
            }
            if (ki > uplimit) {
                s2 += 1;
            }
        }
        if (s1 > 0 || s2 > 0) {
            penalty = 100;
        }
        // the two objective functions
        f[0] = 2 * Math.sqrt(sumSquaredError(Q_obs, Q_shicha) / (n - 2)) + penalty; // random uncertainty
        f[1] = Math.abs(sumRelativeError(Q_obs, Q_shicha) / n) + penalty;           // systematic error
        fobjData.setF(f);
        fobjData.setB(b);
        fobjData.setQ_fenceng(Q_fenceng);
        fobjData.setQ_shicha(Q_shicha);
        return fobjData;
    }

    // helper: sum of squared relative errors
    private static double sumSquaredError(double[] Q_obs, double[] Q_shicha) {
        double sum = 0.0;
        for (int i = 0; i < Q_obs.length; i++) {
            sum += Math.pow((Q_obs[i] - Q_shicha[i]) / Q_shicha[i], 2);
        }
        return sum;
    }

    // helper: sum of relative errors
    private static double sumRelativeError(double[] Q_obs, double[] Q_shicha) {
        double sum = 0.0;
        for (int i = 0; i < Q_obs.length; i++) {
            sum += (Q_obs[i] - Q_shicha[i]) / Q_shicha[i];
        }
        return sum;
    }

    // extract one column of a matrix as a vector
    public static double[] exColsWithNum(double[][] matrix, int columnIndex) {
        double[] columnVector = new double[matrix.length];
        for (int i = 0; i < matrix.length; i++) {
            columnVector[i] = matrix[i][columnIndex];
        }
        return columnVector;
    }

    /**
     * Element-wise subtraction of two double[] arrays.
     * If one of the arrays has a single element, that element is combined with
     * every element of the other array.
     *
     * @param a the first array
     * @param b the second array
     * @return the element-wise difference
     */
    public static double[] subtract(double[] a, double[] b) {
        int lengthA = a.length;
        int lengthB = b.length;
        double[] result;
        if (lengthA == 1 && lengthB > 1) {
            // a has a single element, b has several
            result = new double[lengthB];
            for (int i = 0; i < lengthB; i++) {
                result[i] = a[0] - b[i];
            }
        } else if (lengthB == 1 && lengthA > 1) {
            // b has a single element, a has several
            result = new double[lengthA];
            for (int i = 0; i < lengthA; i++) {
                result[i] = a[i] - b[0];
            }
        } else {
            // both arrays have several elements (or both have exactly one)
            int minLength = Math.min(lengthA, lengthB);
            result = new double[minLength];
            for (int i = 0; i < minLength; i++) {
                result[i] = a[i] - b[i];
            }
        }
        return result;
    }

    // chazhi: linear interpolation of y over x, accelerated with binary search
    private static double[] chazhi(double[] x, double[] y, double[] x0) {
        double[] z = new double[x0.length];
        for (int jj = 0; jj < x0.length; jj++) {
            double xc = x0[jj];
            if (xc <= x[0]) {
                z[jj] = y[0];
                continue;
            }
            if (xc >= x[x.length - 1]) {
                z[jj] = y[y.length - 1];
                continue;
            }
            // binary search instead of a linear scan
            int idx = Arrays.binarySearch(x, xc);
            if (idx >= 0) {
                z[jj] = y[idx];
            } else {
                int insertionPoint = -idx - 1;
                int ii = insertionPoint - 1;
                double weight = (xc - x[ii]) / (x[ii + 1] - x[ii]);
                z[jj] = y[ii] * (1 - weight) + y[ii + 1] * weight;
            }
        }
        return z;
    }

    // non-dominated sorting plus crowding-distance calculation
    public static double[][] nonDominationSort(double[][] x, int M, int V) {
        int N = x.length;
        // fronts of non-dominated solutions
        ArrayList<ArrayList<Integer>> F = new ArrayList<>();
        F.add(new ArrayList<>());
        // per-individual bookkeeping
        Individual[] individuals = new Individual[N];
        // domination counts and dominated sets
        for (int i = 0; i < N; i++) {
            individuals[i] = new Individual();
            for (int j = 0; j < N; j++) {
                // compare individuals i and j
                int domLess = 0, domEqual = 0, domMore = 0;
                for (int k = 0; k < M; k++) {
                    if (x[i][V + k] < x[j][V + k]) {
                        domLess++;
                    } else if (x[i][V + k] == x[j][V + k]) {
                        domEqual++;
                    } else {
                        domMore++;
                    }
                }
                if (domLess == 0 && domEqual != M) {
                    individuals[i].n++;
                }
                if (domMore == 0 && domEqual != M) {
                    individuals[i].p.add(j);
                }
            }
            if (individuals[i].n == 0) {
                x[i][M + V] = 0;
                F.get(0).add(i);
            }
        }
        // peel off successive fronts
        int front = 0;
        while (!F.get(front).isEmpty()) {
            ArrayList<Integer> Q = new ArrayList<>();
            for (int i : F.get(front)) {
                for (int j : individuals[i].p) {
                    if (--individuals[j].n == 0) {
                        x[j][M + V] = front + 1;
                        Q.add(j);
                    }
                }
            }
            front++;
            F.add(Q);
        }
        // sort by front rank
        double[][] sortedBasedOnFront = Arrays.copyOf(x, N);
        Arrays.sort(sortedBasedOnFront, Comparator.comparingDouble(o -> o[M + V]));
        // result rows: variables, objectives, rank and crowding distance
        ArrayList<double[]> z = new ArrayList<>();
        int currentIndex = 0;
        // iterate over all fronts except the trailing empty one
        for (int f = 0; f < F.size() - 1; f++) {
            int frontSize = F.get(f).size();
            double[][] y = new double[frontSize][M + V + 1 + M];
            // copy the current front into y
            for (int i = 0; i < frontSize; i++) {
                System.arraycopy(sortedBasedOnFront[currentIndex + i], 0, y[i], 0, sortedBasedOnFront[0].length);
            }
            currentIndex += frontSize;
            // crowding distance per objective
            for (int i = 0; i < M; i++) {
                final int columnToSort = V + i;
                // sort the front by objective i
                Integer[] indexOfObjectives = new Integer[frontSize];
                for (int j = 0; j < frontSize; j++) {
                    indexOfObjectives[j] = j;
                }
                Arrays.sort(indexOfObjectives, Comparator.comparingDouble(a -> y[a][columnToSort]));
                double fMax = y[indexOfObjectives[frontSize - 1]][columnToSort];
                double fMin = y[indexOfObjectives[0]][columnToSort];
                // boundary individuals get an infinite crowding distance
                y[indexOfObjectives[frontSize - 1]][M + V + 1 + i] = Double.POSITIVE_INFINITY;
                y[indexOfObjectives[0]][M + V + 1 + i] = Double.POSITIVE_INFINITY;
                // interior individuals
                for (int j = 1; j < frontSize - 1; j++) {
                    double nextObj = y[indexOfObjectives[j + 1]][columnToSort];
                    double prevObj = y[indexOfObjectives[j - 1]][columnToSort];
                    if (fMax - fMin == 0) {
                        y[indexOfObjectives[j]][M + V + 1 + i] = Double.POSITIVE_INFINITY;
                    } else {
                        y[indexOfObjectives[j]][M + V + 1 + i] = (nextObj - prevObj) / (fMax - fMin);
                    }
                }
            }
            // total crowding distance
            double[] distance = new double[frontSize];
            for (int i = 0; i < M; i++) {
                for (int j = 0; j < frontSize; j++) {
                    distance[j] += y[j][M + V + 1 + i];
                }
            }
            // store it in column M + V + 1
            for (int j = 0; j < frontSize; j++) {
                y[j][M + V + 1] = distance[j];
            }
            // keep only the first M + V + 2 columns
            double[][] yTruncated = new double[frontSize][M + V + 2];
            for (int i = 0; i < frontSize; i++) {
                System.arraycopy(y[i], 0, yTruncated[i], 0, M + V + 2);
            }
            // append the truncated front to the result
            appendToZ(z, yTruncated, currentIndex - frontSize, currentIndex);
        }
        return convertTo2DArray(z);
    }

    // convert the list of rows to a 2-D array
    public static double[][] convertTo2DArray(ArrayList<double[]> list) {
        double[][] array = new double[list.size()][];
        for (int i = 0; i < list.size(); i++) {
            array[i] = list.get(i);
        }
        return array;
    }

    // copy y into z at the given position
    public static void appendToZ(ArrayList<double[]> z, double[][] y, int previousIndex, int currentIndex) {
        while (z.size() < currentIndex) {
            z.add(new double[y[0].length]);
        }
        for (int i = 0; i < y.length; i++) {
            System.arraycopy(y[i], 0, z.get(previousIndex + i), 0, y[i].length);
        }
    }

    // per-individual bookkeeping: domination count n and the list p of dominated individuals
    private static class Individual {
        int n = 0;
        ArrayList<Integer> p = new ArrayList<>();
    }

    private static double[][] replace(double[][] nest, int numObj, int numOpt, int pop) {
        int N = nest.length;
        double[][] sortedChromosome = Arrays.copyOf(nest, N); // copy of the input
        Arrays.sort(sortedChromosome, Comparator.comparingDouble(o -> o[numObj + numOpt])); // sort by rank
        // highest rank in the current population
        double maxRank = sortedChromosome[N - 1][numObj + numOpt];
        // select individuals by rank and crowding distance until the population reaches its prescribed size
        double[][] f = new double[pop][numObj + numOpt + 2];
        int previousIndex = 0;
        for (int i = 1; i <= maxRank; i++) {
            int currentIndex = 0;
            final int tempI = i;
            double[] column = exColsWithNum(sortedChromosome, numObj + numOpt);
            // last index whose rank equals tempI
            currentIndex = IntStream.range(0, column.length).filter(j -> column[j] == tempI).max().orElse(-1) - 1;
            if (currentIndex > pop) { // the current front overflows the population limit
                int remaining = pop - previousIndex;
                double[][] tempPop = Arrays.copyOfRange(sortedChromosome, previousIndex, currentIndex);
                Integer[] tempSortIndex = new Integer[tempPop.length];
                for (int j = 0; j < tempPop.length; j++) {
                    tempSortIndex[j] = j;
                }
                Arrays.sort(tempSortIndex, (i1, i2) -> Double.compare(tempPop[i2][numObj + numOpt + 1], tempPop[i1][numObj + numOpt + 1])); // crowding distance, descending
                for (int j = 0; j < remaining; j++) {
                    f[previousIndex + j] = tempPop[tempSortIndex[j]];
                }
                return f; // return the truncated population
            } else if (currentIndex < pop) { // the whole front fits
                for (int j = previousIndex; j < currentIndex; j++) {
                    f[j] = sortedChromosome[j];
                }
            } else {
                for (int j = previousIndex; j < currentIndex; j++) {
                    f[j] = sortedChromosome[j];
                }
                return f; // exactly full: return immediately
            }
            previousIndex = currentIndex;
        }
        return f; // final population
    }

    // vertically concatenate two matrices
    public static double[][] verticalConcatenate(double[][] matrix1, double[][] matrix2) {
        // row references are copied, not the row data
        double[][] result = Arrays.copyOf(matrix1, matrix1.length + matrix2.length);
        System.arraycopy(matrix2, 0, result, matrix1.length, matrix2.length);
        return result;
    }
}
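The optimizer uses three small data carriers that are not listed in the original post. Their shape can be read off the getters and setters used above; the sketch below is an assumed version, with the accessors left out for brevity and each class normally in its own file (OptimizationResult also matches the fields documented in section 5.2):

import java.util.List;

public class OptimizationResult {
    private double[] k;          // optimized coefficients
    private double b;            // bias correction
    private double uncertainty;  // random uncertainty
    private double systemError;  // systematic error
    // getters, setters and toString() omitted
}

public class FobjData {
    private double[] f;            // the two objective values
    private double b;              // bias correction for this candidate
    private double[] Q_shicha;     // reconstructed discharge series
    private double[][] Q_fenceng;  // per-layer discharge
    // getters and setters omitted
}

public class InitializeData {
    private double[][] nest;       // initial population (variables followed by objectives)
    private double[] b;
    private double[][] Q_shicha;
    private List<Object> Q_fenceng;
    // getters and setters omitted
}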

3.6 The service class

public class HupService {

    private final HupDataSource dataSource;
    private final HupOptimizer optimizer;

    public HupService(HupDataSource dataSource, HupOptimizer optimizer) {
        this.dataSource = dataSource;
        this.optimizer = optimizer;
    }

    public OptimizationResult performOptimization(String filePath) {
        Map<String, Object> data = dataSource.loadData(filePath);
        return optimizer.optimize(data);
    }
}

4. Quick start

4.1 Add the dependency

Add the following dependency to pom.xml:

<dependency>
    <groupId>com.cjh</groupId>
    <artifactId>cjh-hup-starter</artifactId>
    <version>1.0.0</version>
</dependency>

4.2 Configuration

Configure the algorithm parameters in application.yml; the default values are shown below:

cjh:
  hup:
    data-source-type: excel   # excel and json are currently supported; the default is excel
    num-pop: 500              # population size
    num-obj: 2                # number of objective functions: random uncertainty and systematic error
    num-opt: 5                # number of decision variables
    low-limit: -10            # lower bound of the coefficients
    up-limit: 20              # upper bound of the coefficients
    h1: 173.9                 # elevation of the first acoustic path
    h2: 175.4                 # elevation of the second acoustic path
    h3: 177.1                 # elevation of the third acoustic path
    h4: 178.6                 # elevation of the fourth acoustic path
    w: 0.6                    # river-bottom velocity coefficient
    max-gen: 1000             # maximum number of generations
    q_obs_col: 5              # column holding the observed discharge
    hs_col: 6                 # column holding Hs
    v-cols: [8, 9, 10, 11]    # columns holding the layer velocities
    lg-hs:                    # stage-area relation
      164.95: 0
      164.96: 0.01
      164.97: 0.02
      164.98: 0.03

4.3 Data format

Excel file
  • File name: 1000-2000.xlsx

  • Layout:

    • Column 5: observed discharge Q_obs

    • Columns 8-11: velocities of the four layers v

    • Column 6: stage Hs

JSON data
  • Format:

{
  "Q_obs": [1430.0, 1010.0, 1450.0, 1200.0, 1470.0, 1990.0, 1860.0, 1550.0, 1600.0, 1520.0, 1780.0, 1650.0, 1440.0, 1530.0, 1340.0, 1580.0],
  "v": [
    [0.4825, 0.398, 0.628, 0.422, 0.664, 0.6522, 0.606, 0.5292, 0.534, 0.5111, 0.586, 0.56, 0.485, 0.5106, 0.46, 0.5373],
    [0.5, 0.374, 0.54, 0.46, 0.538, 0.6722, 0.664, 0.55, 0.55, 0.5363, 0.6, 0.5833, 0.5033, 0.52, 0.478, 0.5545],
    [0.515, 0.374, 0.55, 0.48, 0.562, 0.72, 0.67, 0.5633, 0.57, 0.5474, 0.614, 0.6017, 0.53, 0.5589, 0.482, 0.56],
    [0.525, 0.388, 0.54, 0.5, 0.574, 0.7611, 0.69, 0.56, 0.58, 0.5853, 0.524, 0.6, 0.54, 0.5667, 0.508, 0.5691]
  ],
  "Hs": [179.59, 179.18, 179.18, 178.92, 179.58, 179.72, 179.59, 179.42, 179.65, 179.4, 179.68, 179.49, 179.43, 179.56, 179.18, 179.48]
}
The stage-area relation curve is configured as a map
  • Configuration:

cjh:
  hup:
    lg-hs:            # stage-area relation
      164.95: 0
      164.96: 0.01
      164.97: 0.02
      164.98: 0.03
      164.99: 0.04
      165: 0.07
      165.01: 0.09

5. Core API

5.1 The HupService class

Methods
  • performOptimization(String dataPath)

    • Purpose: run the optimization.

    • Parameters:

      • dataPath: the path to a data file, or the data itself (Excel and JSON are supported).

    • Returns: an OptimizationResult containing the optimization result.

Example code
@RestController
@RequestMapping("/optimization")
public class OptimizationController {

    @Autowired
    private HupService hupService;

    @PostMapping
    public ResponseEntity<OptimizationResult> optimize(@RequestParam String dataPath) {
        try {
            OptimizationResult result = hupService.performOptimization(dataPath);
            return ResponseEntity.ok(result);
        } catch (RuntimeException e) {
            // data-loading failures are wrapped in RuntimeException by the data sources
            return ResponseEntity.badRequest().build();
        }
    }
}

5.2 The OptimizationResult class

Fields
  • k: optimized coefficients (double[]).

  • b: bias correction (double).

  • uncertainty: random uncertainty (double).

  • systemError: systematic error (double).

Example output
{
  "k": [1.23, 2.34, 3.45, 4.56, 5.67],
  "b": 0.12,
  "uncertainty": 0.05,
  "systemError": 0.02
}

6. Extensions

6.1 Custom data source

  1. Implement the HupDataSource interface:

@Component
public class CustomDataSource implements HupDataSource {

    @Override
    public Map<String, Object> loadData(String filePath) {
        // custom data-loading logic goes here; the map must contain the keys
        // checked by validateData() and read by the optimizer
        Map<String, Object> data = new HashMap<>();
        // ... fill "Q_obs", "Q_obs_len", "v", "v_len", "Hs", "Hs_len" ...
        validateData(data);
        return data;
    }
}
  2. Specify the data-source type in the configuration file (see the note after the snippet below):

cjh:
  hup:
    data-source-type: custom
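Because the hupDataSource bean in the auto-configuration of section 3.3 is declared @ConditionalOnMissingBean, the most reliable way to activate a custom source is simply to register your own bean; with data-source-type set to custom, neither of the built-in sources matches its @ConditionalOnProperty condition, so only your bean remains. The configuration class below is an assumed sketch, not part of the original starter:

@Configuration
public class CustomHupDataSourceConfig {

    @Bean
    public HupDataSource hupDataSource() {
        // Takes the place of the default bean because the starter's
        // hupDataSource() factory is annotated @ConditionalOnMissingBean.
        return new CustomDataSource();
    }
}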

6.2 Custom optimizer

  1. Extend the HupOptimizer class:

@Component
public class CustomOptimizer extends HupOptimizer {

    public CustomOptimizer(HupProperties properties) {
        super(properties); // HupOptimizer only has a constructor taking HupProperties
    }

    @Override
    public OptimizationResult optimize(Map<String, Object> inputData) {
        // custom optimization logic goes here
        return new OptimizationResult();
    }
}
  2. Specify the optimizer in the configuration file (see the note after the snippet below):

cjh:
  hup:
    optimizer: customOptimizer
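As with the data source, the hupOptimizer bean in section 3.3 is @ConditionalOnMissingBean, so registering your own HupOptimizer bean is enough for the custom implementation to take effect; note that the properties class sketched earlier does not define an optimizer key, so the entry above only becomes meaningful if you add a matching property and condition yourself. An assumed sketch:

@Configuration
public class CustomHupOptimizerConfig {

    @Bean
    public HupOptimizer hupOptimizer(HupProperties properties) {
        // Overrides the starter's default because hupOptimizer()
        // in HupAutoConfiguration is @ConditionalOnMissingBean.
        return new CustomOptimizer(properties);
    }
}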

7. Performance tips

  • Data preprocessing: make sure the input data is well-formed and free of invalid values.

  • Parallel computation: enable multi-threading for large data sets (population initialization already uses a parallel stream).

  • JVM tuning: adjust heap size and garbage-collector settings; see the example below.
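As an illustration of the last point, a long run with a large population might be launched with explicit heap and garbage-collector settings; the values below are examples only and should be tuned for the actual machine and data volume:

java -Xms2g -Xmx4g -XX:+UseG1GC -jar your-application.jar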


8. FAQ

8.1 Data file fails to load

  • Cause: wrong file path or an unsupported format.

  • Fix: check that the path and the file layout match the requirements in section 4.3.

8.2 Unsatisfactory optimization results

  • Cause: poorly chosen algorithm parameters.

  • Fix: adjust parameters such as num-pop and max-gen.

9. Testing and download

9.1 Testing the code

Local tests

@RunWith(SpringRunner.class)
@SpringBootTest
public class CjhHupSpringBootStarterApplicationTests {

    @Autowired
    private HupService hupService;

    @Test
    public void testExcelOptimization() {
        OptimizationResult optimizationResult = hupService.performOptimization(
                "D:\\waterConservancy\\CJH_HUP_JAVA -dev\\src\\test\\resources\\RLine\\excel\\1000-2000.xls");
        System.out.println(optimizationResult.toString());
    }

    @Test
    public void testJsonOptimization() {
        OptimizationResult optimizationResult = hupService.performOptimization(
                "{\"Q_obs\":[1430.0,1010.0,1450.0,1200.0,1470.0,1990.0,1860.0,1550.0,1600.0,1520.0,1780.0,1650.0,1440.0,1530.0,1340.0,1580.0],\"v\":[[0.4825,0.398,0.628,0.422,0.664,0.6522,0.606,0.5292,0.534,0.5111,0.586,0.56,0.485,0.5106,0.46,0.5373],[0.5,0.374,0.54,0.46,0.538,0.6722,0.664,0.55,0.55,0.5363,0.6,0.5833,0.5033,0.52,0.478,0.5545],[0.515,0.374,0.55,0.48,0.562,0.72,0.67,0.5633,0.57,0.5474,0.614,0.6017,0.53,0.5589,0.482,0.56],[0.525,0.388,0.54,0.5,0.574,0.7611,0.69,0.56,0.58,0.5853,0.524,0.6,0.54,0.5667,0.508,0.5691]],\"Hs\":[179.59,179.18,179.18,178.92,179.58,179.72,179.59,179.42,179.65,179.4,179.68,179.49,179.43,179.56,179.18,179.48]}");
        System.out.println(optimizationResult.toString());
    }
}

Testing from a consuming project

Add the starter dependency to the consuming project (the same dependency as in section 4.1).

Write a test controller:

@Controller
public class BasicController {

    @Resource
    private HupService hupService;

    @RequestMapping("/hello")
    @ResponseBody
    public String hello() {
        OptimizationResult result = hupService.performOptimization(
                "D:\\waterConservancy\\CJH_HUP_JAVA -dev\\src\\test\\resources\\RLine\\excel\\1000-2000.xls");
        System.out.println(result.toString());
        return result.toString();
    }
}

The result (the toString() of OptimizationResult) is printed to the console and returned as the response body.

9.2 Source download

https://download.csdn.net/download/u012440725/90355887

