accml.app.tune package

Subpackages

Submodules

accml.app.tune.interface module

class accml.app.tune.interface.TuneControllerInterface(*, mexec, oracle, policy, num_readings, wait_after_set, wait_between_samples, logger)[source]

Bases: ControllerInterface

Parameters:

accml.app.tune.model module

class accml.app.tune.model.MeasuredTuneResponse(col)[source]

Bases: object

Parameters:

col (Sequence[MeasuredTuneResponsePerPowerConverter])

col: Sequence[MeasuredTuneResponsePerPowerConverter]
get(name)[source]
Return type:

MeasuredTuneResponsePerPowerConverter
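A minimal sketch of how the per-power-converter lookup in get() might work. The dataclass names follow the API above, but the implementation is an assumption, not taken from the package:

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class MeasuredTuneResponsePerPowerConverter:
    pc_name: str
    col: Sequence  # Sequence[MeasuredTuneResponseItem]

@dataclass
class MeasuredTuneResponse:
    col: Sequence[MeasuredTuneResponsePerPowerConverter]

    def get(self, name: str) -> MeasuredTuneResponsePerPowerConverter:
        # linear search over the collection; raise if the name is unknown
        for entry in self.col:
            if entry.pc_name == name:
                return entry
        raise KeyError(f"no measured response for power converter {name!r}")

resp = MeasuredTuneResponse(col=[
    MeasuredTuneResponsePerPowerConverter(pc_name="Q1", col=[]),
    MeasuredTuneResponsePerPowerConverter(pc_name="Q2", col=[]),
])
print(resp.get("Q2").pc_name)  # Q2
```

TuneResponseCollection.get below presumably follows the same pattern, keyed on pc_name.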

class accml.app.tune.model.MeasuredTuneResponseItem(setpoint, x, y, repetition=-1)[source]

Bases: object

Todo

Compare it to the tune result model; decide which one should stay

Parameters:
repetition: int | None = -1

The repetition at which this value was measured. Useful when data are collected from “free running” devices; the first reading then needs to be ignored.

setpoint: float

what is this value?

Type:

todo

x: float

horizontal plane

y: float

vertical plane
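The repetition field suggests a post-processing step for free-running devices; a hypothetical filter that drops the first reading might look as follows (the item layout is taken from the fields above, the helper is an assumption):

```python
from dataclasses import dataclass

@dataclass
class MeasuredTuneResponseItem:
    setpoint: float
    x: float
    y: float
    repetition: int = -1

def drop_first_repetition(items):
    """Discard readings from the first repetition (repetition == 0),
    which on free-running devices may still reflect the previous setpoint."""
    return [it for it in items if it.repetition != 0]

items = [
    MeasuredTuneResponseItem(setpoint=1.0, x=0.30, y=0.25, repetition=0),
    MeasuredTuneResponseItem(setpoint=1.0, x=0.31, y=0.26, repetition=1),
    MeasuredTuneResponseItem(setpoint=1.0, x=0.31, y=0.26, repetition=2),
]
kept = drop_first_repetition(items)
print(len(kept))  # 2
```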

class accml.app.tune.model.MeasuredTuneResponsePerPowerConverter(pc_name, col)[source]

Bases: object

Parameters:
col: Sequence[MeasuredTuneResponseItem]
pc_name: str
class accml.app.tune.model.RandomVariableMomenta(mean, std)[source]

Bases: object

Parameters:
mean: float
std: float
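RandomVariableMomenta just bundles the first two moments of repeated readings; computing them could look like the following sketch (the helper function is illustrative, not part of the package):

```python
import statistics
from dataclasses import dataclass

@dataclass
class RandomVariableMomenta:
    mean: float
    std: float

def momenta_from_samples(samples):
    # sample standard deviation; requires at least two readings
    return RandomVariableMomenta(mean=statistics.mean(samples),
                                 std=statistics.stdev(samples))

m = momenta_from_samples([0.30, 0.32, 0.31, 0.29])
print(round(m.mean, 3), round(m.std, 4))  # 0.305 0.0129
```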
class accml.app.tune.model.TuneResponse(pc_name, x, y)[source]

Bases: object

Parameters:
pc_name: str
x: RandomVariableMomenta
y: RandomVariableMomenta
class accml.app.tune.model.TuneResponseCollection(col)[source]

Bases: object

Parameters:

col (Sequence[TuneResponse])

col: Sequence[TuneResponse]
get(name)[source]
Return type:

TuneResponse

Parameters:

name (str)

accml.app.tune.oracle module

class accml.app.tune.oracle.TuneOracle(*, target, col)[source]

Bases: Oracle

Tune oracle, implemented as a pure proportional-integral controller

Parameters:
ask(inp)[source]

Returns the values to apply for the correction

Return type:

(Tune, Dict[str, float])

Parameters:

inp (Tune)
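The oracle maps a measured tune to per-power-converter corrections. A toy sketch of a pure integral step follows; the gains, names and the scalar tune model are assumptions, not the package's actual math:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Tune:
    x: float
    y: float

class ToyTuneOracle:
    def __init__(self, *, target: Tune, gains: Dict[str, float]):
        self.target = target
        self.gains = gains  # integral gain per power converter

    def ask(self, inp: Tune) -> Tuple[Tune, Dict[str, float]]:
        # error between target and measured tune
        diff = Tune(x=self.target.x - inp.x, y=self.target.y - inp.y)
        # pure integral step: a gain-scaled fraction of the error
        # (only the horizontal plane is used in this toy version)
        corrections = {name: g * diff.x for name, g in self.gains.items()}
        return diff, corrections

oracle = ToyTuneOracle(target=Tune(x=0.31, y=0.27), gains={"Q1": 0.5})
diff, corr = oracle.ask(Tune(x=0.29, y=0.27))
print(round(corr["Q1"], 3))  # 0.01
```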

accml.app.tune.policy module

class accml.app.tune.policy.TunePolicy(scale=1.0)[source]

Bases: PolicyBase

Parameters:

scale (float)

step(current_state, diff, step)[source]

Here one could make adjustments to the forecast:

  • if a large step has to be made, one could instead take only a small step, as there are not too many nonlinearities in the model

  • in low-alpha mode, one could step more cautiously than in normal mode

Return type:

Dict[str, float]

Parameters:

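The two bullet points above amount to limiting the forecast step. A hypothetical policy that scales and clamps each actuator change could look like this (the clamping scheme is an assumption):

```python
from typing import Dict

class ToyTunePolicy:
    """Sketch of a step policy: scale the predicted change and clamp it,
    so a large forecast step is taken as several small, safer steps."""

    def __init__(self, scale: float = 1.0, max_step: float = 0.1):
        self.scale = scale
        self.max_step = max_step

    def step(self, current_state: Dict[str, float],
             diff: Dict[str, float], step: int) -> Dict[str, float]:
        out = {}
        for name, d in diff.items():
            d = self.scale * d
            # clamp: only take a small step even if a large one was forecast
            d = max(-self.max_step, min(self.max_step, d))
            out[name] = current_state.get(name, 0.0) + d
        return out

policy = ToyTunePolicy(scale=1.0, max_step=0.1)
new_state = policy.step({"Q1": 1.0}, {"Q1": 0.5}, step=0)
print(new_state["Q1"])  # 1.1
```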
accml.app.tune.preprocess_simple_storage_data module

accml.app.tune.preprocess_simple_storage_data.data_to_model(data)[source]
Return type:

MeasuredTuneResponse

Parameters:

data (Result)

accml.app.tune.tune_correction module

async accml.app.tune.tune_correction.tune_correction(dm, tune_target, *, n_iterations=3, n_samples=2, wait_after_set=0.5, wait_between_sample=0.1, mexec)[source]

Todo

  • reference / target tune from caller

  • consider to provide fine tuning from the outside

  • only connect to the actuators actually required

Parameters:

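A sketch of the iteration structure the signature suggests (n_iterations outer loops, n_samples readings per loop, and the two waits); the measurement and correction callables are placeholders, not the package's API:

```python
import asyncio

async def toy_tune_correction(measure, correct, *, n_iterations=3,
                              n_samples=2, wait_after_set=0.5,
                              wait_between_sample=0.1):
    """Iterate: sample the tune n_samples times, then apply one correction."""
    for _ in range(n_iterations):
        readings = []
        for _ in range(n_samples):
            readings.append(await measure())
            await asyncio.sleep(wait_between_sample)
        await correct(readings)
        # give the actuators time to settle before sampling again
        await asyncio.sleep(wait_after_set)

log = []
async def measure():
    log.append("measure")
    return 0.31
async def correct(readings):
    log.append(("correct", len(readings)))

asyncio.run(toy_tune_correction(measure, correct, n_iterations=2,
                                n_samples=2, wait_after_set=0.0,
                                wait_between_sample=0.0))
print(log.count("measure"))  # 4
```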
accml.app.tune.tune_correction_controller module

class accml.app.tune.tune_correction_controller.TuneCorrectionController(*, mexec, oracle, policy, n_samples, wait_after_set, wait_between_samples, logger=<Logger accml (WARNING)>)[source]

Bases: TuneControllerInterface

A simple I-type controller

It only implements the integral part of the controller, i.e. it reacts to the change.

Parameters:
async continuous(*, read_commands, set_commands, n_steps=None)[source]
Parameters:
  • read_commands (Sequence[ReadCommand]) – commands to retrieve the observed positions

  • set_commands (Sequence[Command]) – commands to set the actuators. Note that a copy of the command will be made and the value will be adapted

  • n_steps – if set to None, run forever; otherwise run at most this number of steps and then stop

Discussion:

Should “read commands” and “set commands” already be made available at init time?

Read commands tell the controller how to obtain an “observable” state. “set_commands” allow the controller to e.g. probe the measurement/command execution engine in use, checking whether these commands are understood and available.

The commands that are actually set will typically be produced by the “policy”. An alternative would be for the policy to change the step to be taken; the commands to be executed would then be generated by the controller.
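The control flow described above (read the observables, ask the oracle for a correction, let the policy turn it into new settings, apply them) can be sketched synchronously. All callables here are stand-ins for the commands and components named above:

```python
def toy_continuous(read, apply, oracle_ask, policy_step, *, n_steps=None):
    """Run the I-controller loop: if n_steps is None, run forever,
    otherwise run at most n_steps iterations."""
    state = {}
    step = 0
    while n_steps is None or step < n_steps:
        observed = read()                        # read_commands
        diff, correction = oracle_ask(observed)  # oracle proposes a correction
        state = policy_step(state, correction, step)
        apply(state)                             # set_commands (copied/adapted)
        step += 1
    return state

target = 0.31
measured = [0.29]
def read():
    return measured[-1]
def oracle_ask(obs):
    err = target - obs
    return err, {"Q1": 0.5 * err}
def policy_step(state, corr, step):
    return {k: state.get(k, 0.0) + v for k, v in corr.items()}
def apply(state):
    # toy plant: the measured tune moves toward the target with the setting
    measured.append(0.29 + state["Q1"])

final = toy_continuous(read, apply, oracle_ask, policy_step, n_steps=5)
print(round(measured[-1], 4))
```

With each step the integral action halves the remaining error, so the measured value converges toward the target.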

async one_step(read_commands, set_commands)[source]
Parameters:
accml.app.tune.tune_correction_controller.compute_correction_state(*, oracle, policy, current_state, t_tune, logger)[source]
Return type:

Dict[str, float]

Parameters:
accml.app.tune.tune_correction_controller.compute_stat_for_oracle(inp)[source]
Return type:

CorrectionStat

Parameters:

inp (Dict[str, float])

accml.app.tune.tune_correction_controller.compute_stat_for_transactional_command(inp)[source]
Return type:

CorrectionStat

Parameters:

inp (TransactionCommand)

accml.app.tune.tune_correction_controller.correction_action_to_commands(correction_actions, set_commands)[source]
Return type:

TransactionCommand

Parameters:

accml.app.tune.tune_measurement module

async accml.app.tune.tune_measurement.measure_tune_response(*, detectors, quadrupole_pc_names, measurement_values, mexec=None, **kwargs)[source]

Todo

Rename detectors to read_detectors? These are commands rather than real detectors.

Parameters:

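The measurement presumably steps each quadrupole power converter through the given values and reads the detectors after every setting. A toy async sketch of that loop (all callables and the restore-to-nominal step are assumptions):

```python
import asyncio

async def toy_measure_tune_response(*, read_tune, set_pc,
                                    quadrupole_pc_names, measurement_values):
    """For each power converter, apply each measurement value in turn and
    record the tune read back; returns {pc_name: [(setpoint, tune), ...]}."""
    result = {}
    for pc in quadrupole_pc_names:
        rows = []
        for val in measurement_values:
            await set_pc(pc, val)
            rows.append((val, await read_tune()))
        # restore the nominal setting before moving to the next converter
        await set_pc(pc, 0.0)
        result[pc] = rows
    return result

async def set_pc(pc, val):
    pass
async def read_tune():
    return (0.31, 0.27)

data = asyncio.run(toy_measure_tune_response(
    read_tune=read_tune, set_pc=set_pc,
    quadrupole_pc_names=["Q1", "Q2"], measurement_values=[-0.1, 0.0, 0.1]))
print(sorted(data), len(data["Q1"]))  # ['Q1', 'Q2'] 3
```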
accml.app.tune.tune_response_analysis module

accml.app.tune.tune_response_analysis.fit_line(indep, dep)[source]
Return type:

RandomVariableMomenta

Parameters:

Todo

Need to compute the standard deviation; it still has to be decided which algorithm to use.

The orbit response analysis profits from least squares (lstsq), so perhaps it should be used here as well.
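A minimal least-squares line fit that also estimates the slope's standard error (textbook formulas; whether the package should use exactly this estimator is the open todo above):

```python
import math
from dataclasses import dataclass

@dataclass
class RandomVariableMomenta:
    mean: float
    std: float

def fit_line(indep, dep):
    """Fit dep = a * indep + b; return the slope a as mean,
    with its standard error as std."""
    n = len(indep)
    mx = sum(indep) / n
    my = sum(dep) / n
    sxx = sum((x - mx) ** 2 for x in indep)
    sxy = sum((x - mx) * (y - my) for x, y in zip(indep, dep))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual variance -> standard error of the slope (needs n > 2)
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(indep, dep))
    std = math.sqrt(ss_res / (n - 2) / sxx) if n > 2 else float("nan")
    return RandomVariableMomenta(mean=slope, std=std)

m = fit_line([0.0, 1.0, 2.0, 3.0], [0.1, 2.1, 3.9, 6.1])
print(round(m.mean, 2))  # 1.98
```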

accml.app.tune.tune_response_analysis.fit_one_power_converter(data)[source]
Parameters:

data (MeasuredTuneResponsePerPowerConverter)

accml.app.tune.tune_response_analysis.tune_response_analysis(prep_data)[source]
Return type:

TuneResponseCollection

Parameters:

prep_data (MeasuredTuneResponse)

Module contents