1. Develop an integrated multi-objective optimization model for green construction that incorporates dynamic uncertainty factors
Building on the traditional time-cost-quality model, the quantitative modeling of the safety, environmental, and resource objectives is deepened. The safety objective considers not only static safety-investment cost but also introduces a safety performance index based on dynamic job-risk assessment; the index is linked to activity type, work intensity, the complexity of overlapping operations, and real-time monitoring data. The environmental objective is refined into sub-indicators such as carbon emissions, water consumption, waste generation, noise pollution, and light pollution, and a life-cycle assessment approach brings the whole-process environmental impact (material production and transport, on-site construction, and end-of-life disposal) into the model. The resource objective jointly considers the consumption intensity and recycling rate of water, energy, and the main building materials.
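The life-cycle aggregation of the environmental objective can be sketched as follows. This is a minimal illustration, not the model's actual implementation: the stage breakdown and all emission factors below are made-up assumptions.

```python
import numpy as np

def lifecycle_emissions(material_qty, factors):
    """Aggregate emissions over life-cycle stages.

    material_qty: tonnes of each material, shape (n_materials,)
    factors: kgCO2e per tonne per stage (production, transport,
             on-site, disposal), shape (n_materials, n_stages);
             all values here are illustrative placeholders.
    """
    stage_totals = material_qty @ factors  # per-stage totals across materials
    return stage_totals, stage_totals.sum()

# Hypothetical example: 100 t of concrete, 50 t of steel
qty = np.array([100.0, 50.0])
factors = np.array([[300.0, 20.0, 10.0, 5.0],
                    [800.0, 30.0, 15.0, 10.0]])
stages, total = lifecycle_emissions(qty, factors)
```

The per-stage vector lets the model report which life-cycle stage dominates, while the scalar total feeds the environmental objective.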
The key innovation is that the model incorporates dynamic uncertainty factors, such as weather-driven fluctuations in activity efficiency, volatility in raw-material prices, and the risk of sudden equipment failure. Drawing on scenario trees and robust optimization, selected key parameters are treated as random variables that fluctuate within given ranges, and the optimization target becomes the aggregate performance under the worst case or in expectation. This brings the model closer to the dynamic, uncertain reality of construction management and makes the optimized results more robust.
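The scenario-based robust evaluation described above can be sketched as follows. This is a simplified illustration under stated assumptions: the uniform fluctuation ranges, the serial schedule, and the two scenario factors (weather efficiency and material price) are all hypothetical choices, not parameters from the model.

```python
import numpy as np

def robust_makespan_cost(base_durations, base_costs, n_scenarios=100, seed=0):
    """Evaluate a schedule under sampled uncertainty scenarios.

    Returns the worst-case makespan and the expected cost across
    scenarios; fluctuation ranges are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    worst_makespan, expected_cost = 0.0, 0.0
    for _ in range(n_scenarios):
        efficiency = rng.uniform(0.8, 1.2, size=base_durations.shape)  # weather effect
        price = rng.uniform(0.9, 1.3)                                  # material price factor
        durations = base_durations / efficiency
        makespan = durations.sum()                  # serial schedule, for illustration only
        cost = (base_costs * durations).sum() * price
        worst_makespan = max(worst_makespan, makespan)  # worst-case criterion
        expected_cost += cost / n_scenarios             # expected-value criterion
    return worst_makespan, expected_cost
```

The same pattern extends to the full model: each candidate schedule is scored by its worst-case or expected objective vector over the sampled scenarios, which is what makes the selected solutions robust.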
2. Propose an improved NSGA-II algorithm combining multi-strategy adaptive crossover and mutation with local-search enhancement
The simulated binary crossover (SBX) and polynomial mutation of standard NSGA-II use fixed parameters and adapt poorly in high-dimensional, complex search spaces. A multi-strategy adaptive crossover-mutation mechanism is therefore designed. The algorithm maintains a pool of crossover and mutation operators, including SBX, differential-evolution-style crossover, and direction-based crossover. In each generation, the operator most likely to improve an individual is selected dynamically according to that individual's evolutionary state.
Mutation probability and step size are also adaptively tied to each individual's rank and crowding: individuals on rear fronts or in densely crowded regions receive a higher mutation probability and larger mutation steps to strengthen exploration, while individuals on leading fronts in sparse regions receive a lower probability and finer mutations for deep exploitation. To address the weak local-search ability of NSGA-II, a periodically triggered local-search module is introduced. Every fixed number of generations, representative individuals are selected from the current non-dominated set, and a small-scale, objective-directed local search is performed in the neighborhood of their decision variables.
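The rank- and crowding-adaptive mutation rate can be illustrated with a small rule of this shape. The specific scaling below is an assumption for illustration: rank is normalized by the number of fronts, and a small crowding distance (a tightly packed region) pushes the rate up.

```python
def adaptive_mutation_prob(base_prob, rank, crowding_distance, max_rank):
    """Illustrative adaptive mutation rate.

    rank: 0 for the first non-dominated front, max_rank for the last.
    crowding_distance: NSGA-II crowding distance (may be inf at boundaries).
    """
    rank_term = rank / max(max_rank, 1)             # 0 on the first front, 1 on the last
    density_term = 1.0 / (1.0 + crowding_distance)  # near 1 when tightly crowded
    prob = base_prob * (1.0 + rank_term + density_term)
    return min(prob, 1.0)                           # keep it a valid probability
```

A boundary individual on the first front (infinite crowding distance) keeps the base rate, while a rear-front individual in a dense region gets a substantially larger one, matching the exploration/exploitation split described above.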

Through a strategy of "global evolutionary search first, periodic local refinement second," the algorithm can approach every detailed region of the true Pareto front more effectively.
3. Design a post-processing method for high-dimensional solution sets based on entropy-weighted TOPSIS and fuzzy decision-making
The improved algorithm ultimately produces a high-dimensional Pareto front (six objectives) containing tens or even hundreds of non-dominated solutions. Choosing the final implementation plan from such a set is a challenge for decision makers, so a systematic post-processing and decision-support workflow is developed.
First, dimensionality-reduction techniques such as principal component analysis or t-SNE project the six-dimensional objective space into two or three dimensions for visualization, helping decision makers grasp the overall distribution and trade-offs of the solution set. Second, to handle the differing units and importance of the objectives, an automatic screening method combining the entropy-weight method with TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is proposed. The entropy-weight method computes objective weights from the dispersion of each objective's values across the solution set. TOPSIS then uses those weights to compute each solution's relative closeness to the positive and negative ideal solutions and ranks all solutions accordingly.
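The entropy-weight and TOPSIS steps can be sketched over a small objective matrix (rows are solutions, columns are objectives; all objectives are minimized here). The 3x2 demo matrix is made up for illustration.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the dispersion of each column (entropy-weight method)."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    entropy = -(P * logP).sum(axis=0) / np.log(n)
    d = 1.0 - entropy                          # degree of divergence per objective
    return d / d.sum()

def topsis_rank(X, weights):
    """Relative closeness to the ideal solution (all objectives minimized)."""
    V = weights * X / np.linalg.norm(X, axis=0)    # weighted, vector-normalized
    ideal, anti = V.min(axis=0), V.max(axis=0)     # positive / negative ideal
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return closeness, np.argsort(-closeness)       # higher closeness = better

# Hypothetical front: solution 0 is cheap, solution 2 is dominated by solution 1
X = np.array([[1.0, 10.0],
              [2.0, 8.0],
              [3.0, 12.0]])
w = entropy_weights(X)
closeness, order = topsis_rank(X, w)
```

Because solution 2 is dominated by solution 1 in both objectives, TOPSIS necessarily ranks it lower, which is the sanity check to run on any implementation of this step.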
Finally, fuzzy decision theory is introduced for the final choice. Decision makers describe their preferences for each objective in natural language as fuzzy ranges; these fuzzy preferences are converted into fuzzy membership functions over the objective space, and each candidate plan's aggregate satisfaction of the preferences is computed. The plan with the highest aggregate satisfaction is recommended, or the satisfaction scores are combined with the TOPSIS ranking, giving decision makers a clear reference that fuses objective data with subjective preference.
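One common way to encode such verbal preferences ("cost should stay roughly below 120") is a trapezoidal membership function, aggregated across objectives with a min operator. The breakpoints and the min-aggregation below are illustrative assumptions, not the model's actual membership functions.

```python
def trapezoid_membership(x, full_until, zero_after):
    """Membership is 1.0 up to `full_until`, falling linearly to 0.0 at `zero_after`."""
    if x <= full_until:
        return 1.0
    if x >= zero_after:
        return 0.0
    return (zero_after - x) / (zero_after - full_until)

def fuzzy_satisfaction(solution_objs, preferences):
    """Aggregate satisfaction of one candidate plan.

    preferences: per-objective (full_until, zero_after) pairs; min-aggregation
    means a plan is only as good as its worst-satisfied preference.
    """
    degrees = [trapezoid_membership(v, lo, hi)
               for v, (lo, hi) in zip(solution_objs, preferences)]
    return min(degrees)

# Hypothetical preferences for (cost, emission)
prefs = [(100.0, 140.0), (30.0, 60.0)]
score = fuzzy_satisfaction([120.0, 45.0], prefs)
```

A candidate within every full-preference range scores 1.0, one violating any hard limit scores 0.0, and intermediate plans get graded scores that can be combined with the TOPSIS ranking.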
import numpy as np

# NOTE: this listing was reconstructed from a damaged source. The structure
# and variable names follow the original; numeric constants marked
# "illustrative" replace values that were lost.


class GreenConstructionProject:
    def __init__(self, num_activities, num_resources):
        self.num_act = num_activities
        self.num_res = num_resources
        self.duration = np.random.randint(5, 30, num_activities)
        self.cost_per_day = np.random.rand(num_activities) * 500 + 100  # illustrative offset
        self.predecessors = self.generate_precedence()
        self.resource_need = np.random.rand(num_activities, num_resources) * 10
        self.resource_cost = np.random.rand(num_resources) * 50 + 10
        self.quality_impact = np.random.rand(num_activities)
        self.safety_risk = np.random.rand(num_activities)
        self.emission_per_day = np.random.rand(num_activities) * 5
        self.waste_per_day = np.random.rand(num_activities) * 2
        # Objective weights (illustrative; equal weighting by default)
        self.quality_weight = 1.0
        self.safety_weight = 1.0
        self.emission_weight = 1.0
        self.waste_weight = 1.0
        self.resource_weight = 1.0

    def generate_precedence(self):
        # Random precedence graph: activity i may depend only on earlier activities
        pred = [[] for _ in range(self.num_act)]
        for i in range(1, self.num_act):
            possible_pred = list(range(i))
            num_pred = np.random.randint(0, min(3, i + 1))
            chosen_pred = np.random.choice(possible_pred, num_pred, replace=False)
            pred[i] = list(chosen_pred)
        return pred

    def decode_schedule(self, chrom):
        # Genes in [0, 1] scaled to start times over an illustrative 100-day horizon
        start_times = chrom * 100.0
        return start_times

    def evaluate(self, chrom):
        start_times = self.decode_schedule(chrom)
        end_times = start_times + self.duration
        makespan = np.max(end_times)
        total_direct_cost = np.sum(self.cost_per_day * self.duration)

        # Daily resource profile and peak demand
        daily_resource = np.zeros((int(np.ceil(makespan)), self.num_res))
        for i in range(self.num_act):
            s = int(start_times[i])
            e = int(end_times[i])
            for t in range(s, e):
                if t < len(daily_resource):
                    daily_resource[t, :] += self.resource_need[i, :]
        peak_resources = np.max(daily_resource, axis=0)
        resource_cost = np.sum(peak_resources * self.resource_cost)
        total_cost = total_direct_cost + resource_cost

        # Quality score: earlier-than-average starts accumulate full impact,
        # later starts half (as in the source)
        quality_score = 0.0
        for i in range(self.num_act):
            q_impact = self.quality_impact[i]
            if start_times[i] < np.mean(start_times):
                quality_score += q_impact * 1.0
            else:
                quality_score += q_impact * 0.5
        normalized_quality = 1.0 / (1.0 + quality_score)

        # Safety: daily risk weighted by the number of concurrent activities
        safety_score = 0.0
        max_concurrent = 0
        for t in range(len(daily_resource)):
            active = (start_times <= t) & (end_times > t)
            concurrent_act = int(np.sum(active))
            max_concurrent = max(max_concurrent, concurrent_act)
            safety_score += np.sum(self.safety_risk[np.where(active)[0]]) * (1.0 + concurrent_act * 0.1)
        safety_index = 1.0 / (1.0 + safety_score / len(daily_resource))

        # Environment and resources (denominators are illustrative scales)
        total_emission = np.sum(self.emission_per_day * self.duration)
        normalized_emission = 1.0 / (1.0 + total_emission / 100.0)
        total_waste = np.sum(self.waste_per_day * self.duration)
        normalized_waste = 1.0 / (1.0 + total_waste / 100.0)  # reported; waste enters as a constraint
        resource_consumption = np.sum(peak_resources)
        normalized_resource = 1.0 / (1.0 + resource_consumption / 100.0)

        # Penalize precedence violations (penalty sizes illustrative)
        for i in range(self.num_act):
            for pred in self.predecessors[i]:
                if start_times[i] < end_times[pred]:
                    makespan += 50.0
                    total_cost += 1000.0

        # Six minimization objectives; maximized quantities are negated
        objectives = np.array([makespan, total_cost, -normalized_quality,
                               -safety_index, -normalized_emission, -normalized_resource])
        constraints = np.array([total_waste])
        return objectives, constraints


# (The class name was lost in the source; "ImprovedNSGA2" is chosen to match the text.)
class ImprovedNSGA2:
    def __init__(self, problem, pop_size=50, max_gen=100, cx_prob=0.9, mut_prob=0.1):
        self.problem = problem
        self.pop_size = pop_size
        self.max_gen = max_gen
        self.cx_prob = cx_prob
        self.mut_prob_base = mut_prob
        self.dim = problem.num_act
        self.num_obj = 6
        self.population = None
        self.objectives = None
        self.constraints = None

    def init_population(self):
        self.population = np.random.rand(self.pop_size, self.dim)
        self.objectives = np.zeros((self.pop_size, self.num_obj))
        self.constraints = np.zeros((self.pop_size, 1))
        for i in range(self.pop_size):
            obj, cons = self.problem.evaluate(self.population[i])
            self.objectives[i] = obj
            self.constraints[i] = cons

    def non_dominated_sort(self, obj_vals):
        n = len(obj_vals)
        dominates = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(n):
                if i != j:
                    less_eq = np.all(obj_vals[i] <= obj_vals[j])
                    less = np.any(obj_vals[i] < obj_vals[j])
                    if less_eq and less:
                        dominates[i, j] = True
        S = [np.where(dominates[i])[0] for i in range(n)]
        n_dominated = np.sum(dominates, axis=0)
        fronts = []
        current_front = np.where(n_dominated == 0)[0]
        while len(current_front) > 0:
            fronts.append(np.asarray(current_front, dtype=int))
            next_front = []
            for i in current_front:
                for j in S[i]:
                    n_dominated[j] -= 1
                    if n_dominated[j] == 0:
                        next_front.append(j)
            current_front = next_front
        return fronts

    def crowding_distance(self, obj_vals, front_indices):
        # Returns distances aligned with the order of front_indices
        n = len(front_indices)
        distances = np.zeros(n)
        if n <= 2:
            distances[:] = np.inf
            return distances
        num_obj = obj_vals.shape[1]
        for m in range(num_obj):
            order = np.argsort(obj_vals[front_indices, m])  # local positions sorted by objective m
            distances[order[0]] = np.inf
            distances[order[-1]] = np.inf
            obj_range = (obj_vals[front_indices[order[-1]], m]
                         - obj_vals[front_indices[order[0]], m])
            if obj_range == 0:
                continue
            for i in range(1, n - 1):
                next_obj = obj_vals[front_indices[order[i + 1]], m]
                prev_obj = obj_vals[front_indices[order[i - 1]], m]
                distances[order[i]] += (next_obj - prev_obj) / obj_range
        return distances

    def adaptive_sbx_crossover(self, parent1, parent2, eta_c):
        # SBX with a rank-adaptive distribution index eta_c; variable bounds are [0, 1]
        if np.random.rand() > self.cx_prob:
            return parent1.copy(), parent2.copy()
        child1 = parent1.copy()
        child2 = parent2.copy()
        for i in range(self.dim):
            if np.random.rand() <= 0.5:
                if abs(parent1[i] - parent2[i]) > 1e-14:
                    y1 = min(parent1[i], parent2[i])
                    y2 = max(parent1[i], parent2[i])
                    rand = np.random.rand()
                    # Child near the lower parent
                    beta = 1.0 + (2.0 * y1 / (y2 - y1))
                    alpha = 2.0 - beta ** -(eta_c + 1.0)
                    if rand <= (1.0 / alpha):
                        beta_q = (rand * alpha) ** (1.0 / (eta_c + 1.0))
                    else:
                        beta_q = (1.0 / (2.0 - rand * alpha)) ** (1.0 / (eta_c + 1.0))
                    c1 = 0.5 * ((y1 + y2) - beta_q * (y2 - y1))
                    # Child near the upper parent
                    beta = 1.0 + (2.0 * (1.0 - y2) / (y2 - y1))
                    alpha = 2.0 - beta ** -(eta_c + 1.0)
                    if rand <= (1.0 / alpha):
                        beta_q = (rand * alpha) ** (1.0 / (eta_c + 1.0))
                    else:
                        beta_q = (1.0 / (2.0 - rand * alpha)) ** (1.0 / (eta_c + 1.0))
                    c2 = 0.5 * ((y1 + y2) + beta_q * (y2 - y1))
                    child1[i] = min(max(c1, 0.0), 1.0)
                    child2[i] = min(max(c2, 0.0), 1.0)
        return child1, child2

    def adaptive_polynomial_mutation(self, child, rank, crowding, eta_m):
        # Mutation rate rises with crowding and falls with rank (as in the source);
        # boundary individuals with infinite crowding distance are capped.
        crowding_term = crowding if np.isfinite(crowding) else 1.0
        mut_prob = min(1.0, self.mut_prob_base * (1.0 + crowding_term) / (1.0 + rank))
        for i in range(self.dim):
            if np.random.rand() < mut_prob:
                y = child[i]
                delta1 = y            # distance to lower bound 0 over range 1
                delta2 = 1.0 - y      # distance to upper bound 1
                rnd = np.random.rand()
                mut_pow = 1.0 / (eta_m + 1.0)
                if rnd <= 0.5:
                    xy = 1.0 - delta1
                    val = 2.0 * rnd + (1.0 - 2.0 * rnd) * (xy ** (eta_m + 1.0))
                    delta_q = val ** mut_pow - 1.0
                else:
                    xy = 1.0 - delta2
                    val = 2.0 * (1.0 - rnd) + 2.0 * (rnd - 0.5) * (xy ** (eta_m + 1.0))
                    delta_q = 1.0 - val ** mut_pow
                y = y + delta_q
                child[i] = min(max(y, 0.0), 1.0)
        return child

    def local_search(self, individual, obj_idx_to_improve, steps=10):
        # Hill-climb one random dimension at a time toward one objective. The
        # original fixed constraint bound was lost, so moves are accepted only
        # if the waste constraint does not worsen.
        best_ind = individual.copy()
        best_obj, best_cons = self.problem.evaluate(best_ind)
        for _ in range(steps):
            new_ind = best_ind.copy()
            dim_to_mutate = np.random.randint(0, self.dim)
            new_ind[dim_to_mutate] += np.random.randn() * 0.05  # illustrative step size
            new_ind[dim_to_mutate] = np.clip(new_ind[dim_to_mutate], 0.0, 1.0)
            new_obj, new_cons = self.problem.evaluate(new_ind)
            if (new_obj[obj_idx_to_improve] < best_obj[obj_idx_to_improve]
                    and np.all(new_cons <= best_cons)):
                best_ind = new_ind
                best_obj = new_obj
                best_cons = new_cons
        return best_ind

    def select_parents(self, population, obj_vals, fronts, crowding_dist):
        selected = []
        for front in fronts:
            if len(selected) + len(front) > self.pop_size:
                last_front_sorted = sorted(zip(front, crowding_dist[front]),
                                           key=lambda x: x[1], reverse=True)
                indices_to_take = [idx for idx, _ in
                                   last_front_sorted[:self.pop_size - len(selected)]]
                selected.extend(indices_to_take)
                break
            else:
                selected.extend(front)
        return population[selected]

    def run(self):
        self.init_population()
        all_pop = self.population
        all_obj = self.objectives
        all_cons = self.constraints
        for gen in range(self.max_gen):
            fronts = self.non_dominated_sort(all_obj)
            crowding_dist = np.zeros(len(all_obj))
            rank_of = np.zeros(len(all_obj), dtype=int)
            for r, front in enumerate(fronts):
                crowding_dist[front] = self.crowding_distance(all_obj, front)
                rank_of[front] = r
            parents = self.select_parents(all_pop, all_obj, fronts, crowding_dist)

            offspring = []
            for i in range(0, self.pop_size, 2):
                idx1, idx2 = np.random.choice(len(parents), 2, replace=False)
                p1, p2 = parents[idx1], parents[idx2]
                # Locate parents in the current population to read rank/crowding
                g1 = int(np.where((all_pop == p1).all(axis=1))[0][0])
                g2 = int(np.where((all_pop == p2).all(axis=1))[0][0])
                rank1, rank2 = rank_of[g1], rank_of[g2]
                avg_rank = (rank1 + rank2) / 2.0
                # Lower eta_c for rear-ranked parents -> wider exploration (scaling illustrative)
                eta_c = max(20.0 - avg_rank * 2.0, 2.0)
                c1, c2 = self.adaptive_sbx_crossover(p1, p2, eta_c)
                c1 = self.adaptive_polynomial_mutation(c1, rank1, crowding_dist[g1], 20.0)
                c2 = self.adaptive_polynomial_mutation(c2, rank2, crowding_dist[g2], 20.0)
                offspring.extend([c1, c2])
            offspring = np.array(offspring[:self.pop_size])

            # Periodic local search on representatives of the first front
            if gen % 10 == 0 and gen > 0:
                first_front = fronts[0]
                if len(first_front) > 3:
                    selected_for_ls = np.random.choice(first_front, size=3, replace=False)
                    for idx in selected_for_ls:
                        obj_to_improve = np.random.randint(0, self.num_obj)
                        improved = self.local_search(all_pop[idx], obj_to_improve)
                        offspring = np.vstack([offspring, improved.reshape(1, -1)])

            off_obj = np.zeros((len(offspring), self.num_obj))
            off_cons = np.zeros((len(offspring), 1))
            for i in range(len(offspring)):
                obj, cons = self.problem.evaluate(offspring[i])
                off_obj[i] = obj
                off_cons[i] = cons

            # Elitist (mu + lambda) environmental selection
            combined_pop = np.vstack([all_pop, offspring])
            combined_obj = np.vstack([all_obj, off_obj])
            combined_cons = np.vstack([all_cons, off_cons])
            fronts_combined = self.non_dominated_sort(combined_obj)
            next_pop, next_obj, next_cons = [], [], []
            for front in fronts_combined:
                if len(next_pop) + len(front) <= self.pop_size:
                    next_pop.extend(combined_pop[front])
                    next_obj.extend(combined_obj[front])
                    next_cons.extend(combined_cons[front])
                else:
                    crowding_last = self.crowding_distance(combined_obj, front)
                    sorted_last = sorted(zip(front, crowding_last),
                                         key=lambda x: x[1], reverse=True)
                    needed = self.pop_size - len(next_pop)
                    for idx, _ in sorted_last[:needed]:
                        next_pop.append(combined_pop[idx])
                        next_obj.append(combined_obj[idx])
                        next_cons.append(combined_cons[idx])
                    break
            all_pop = np.array(next_pop)
            all_obj = np.array(next_obj)
            all_cons = np.array(next_cons)

            if gen % 10 == 0:
                first_front_indices = self.non_dominated_sort(all_obj)[0]
                avg_obj_first = np.mean(all_obj[first_front_indices], axis=0)
                print(f"gen {gen}: mean first-front objectives = {avg_obj_first}")

        final_front = self.non_dominated_sort(all_obj)[0]
        pareto_solutions = all_pop[final_front]
        pareto_objectives = all_obj[final_front]
        return pareto_solutions, pareto_objectives

