
MetaModel dataclass

Class that manages multiple models and datasets at once. This enables computing a single model for multiple datasets as well as multiple models with shared parameters.

Attributes:

global_comm (MPI.Comm | MPI.Intracomm): The MPI communicator used for all datasets.

models (tuple[Model, ...]): Tuple of models to fit together.

datasets (tuple[DataSet, ...]): Tuple of datasets to fit together.

parameter_map (tuple[tuple[int, ...], ...]): Structure to map the parameters held in MetaModel to each individual Model instance. The ith entry is a tuple that indexes MetaModel.parameters to get the parameters for the ith Model in MetaModel.models.

model_map (tuple[tuple[int, ...], ...]): Structure to map models to datasets. The jth entry is a tuple listing the indices of MetaModel.models used by MetaModel.datasets[j].

metadata_map (tuple[tuple[tuple[int, ...], ...], ...]): Structure to map what metadata to apply to which model. The jth entry is an n_model length tuple in which the ith entry lists the indices of MetaModel.datasets[j].metadata to apply to MetaModel.models[i].

parameters (Array): The parameters of this MetaModel.

errors (Array): The error currently associated with each element of MetaModel.parameters.

chisq (Array): The log-likelihood of the current state of the MetaModel.
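To make the mapping attributes concrete, here is a minimal sketch (with hypothetical parameter values and maps) of how `parameter_map` slices the shared parameter vector for each model:

```python
import numpy as np

# Hypothetical flat parameter vector held by the MetaModel.
parameters = np.array([1.0, 2.0, 3.0, 4.0])

# The ith entry of parameter_map indexes `parameters` to produce the
# parameter vector for the ith model; here both models share index 1,
# so fitting that parameter updates both of them at once.
parameter_map = ((0, 1), (1, 2, 3))

per_model_pars = [parameters[list(pm)] for pm in parameter_map]
print(per_model_pars[0])  # [1. 2.]
print(per_model_pars[1])  # [2. 3. 4.]
```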

Source code in witch/containers/metamodel.py
@register_pytree_node_class
@dataclass
class MetaModel:
    """
    Class that manages multiple models and datasets at once.
    This enables computing a single model for multiple datasets
    as well as multiple models with shared parameters.

    Attributes
    ----------
    global_comm : MPI.Comm | MPI.Intracomm
        The MPI communicator used for all datasets.
    models : tuple[Model, ...]
        Tuple of models to fit together.
    datasets : tuple[DataSet, ...]
        Tuple of datasets to fit together.
    parameter_map : tuple[tuple[int, ...], ...]
        Structure to map the parameters held in `MetaModel`
        to each individual `Model` instance.
        The `i`th entry is a tuple that indexes `MetaModel.parameters`
        to get the parameters for the `i`th `Model` in `MetaModel.models`.
    model_map : tuple[tuple[int, ...], ...]
        Structure to map models to datasets.
        The `j`th entry is a tuple listing the indices of
        `MetaModel.models` used by `MetaModel.datasets[j]`.
    metadata_map : tuple[tuple[tuple[int, ...], ...], ...]
        Structure to map what metadata to apply to which model.
        The `j`th entry is an `n_model` length tuple in which the
        `i`th entry lists the indices of `MetaModel.datasets[j].metadata`
        to apply to `MetaModel.models[i]`.
    parameters : jax.Array
        The parameters of this `MetaModel`.
    errors : jax.Array
        The error currently associated with each element of `MetaModel.parameters`.
    chisq : jax.Array
        The log-likelihood of the current state of the `MetaModel`.
    """

    global_comm: MPI.Comm | MPI.Intracomm | wu.NullComm
    models: tuple[Model, ...]
    datasets: tuple[DataSet, ...]
    parameter_map: tuple[tuple[int, ...], ...]
    model_map: tuple[tuple[int, ...], ...]
    metadata_map: tuple[tuple[tuple[int, ...], ...], ...]
    parameters: jax.Array
    errors: jax.Array
    chisq: jax.Array

    def __repr__(self) -> str:
        reprs = [str(model) for model in self.models]
        rep = []
        idx = 0
        for r in reprs:
            message = r.split("\n")
            rep += message[idx:-1]
        rep = "\n".join(rep)
        rep += f"\nchisq is {self.chisq}"
        return rep

    def check_compatibility(self, other: Self) -> bool:
        """
        Check if another `MetaModel` instance is compatible with this one.
        Used for checkpointing.

        Parameters
        ----------
        other : MetaModel
            The `MetaModel` instance to check compatibility with.

        Returns
        -------
        compatible : bool
            True if compatible,
            False if not.
        """
        # Check models
        if len(self.models) != len(other.models):
            return False
        for sm, om in zip(self.models, other.models):
            if not sm.check_compatibility(om):
                return False

        # Check datasets (just names for now)
        if len(self.datasets) != len(other.datasets):
            return False
        for sd, od in zip(self.datasets, other.datasets):
            if sd.name != od.name:
                return False

        # Check mappings
        if self.parameter_map != other.parameter_map:
            return False
        if self.model_map != other.model_map:
            return False
        if self.metadata_map != other.metadata_map:
            return False

        return True

    @cached_property
    def par_names(self) -> tuple[str, ...]:
        """
        Get the parameter names in the same order as `self.parameters`.

        Returns
        -------
        par_names : tuple[str, ...]
            Tuple of parameter names.
        """
        par_names = np.zeros_like(self.parameters, dtype="U128")
        for model, par_map in zip(self.models, self.parameter_map):
            par_names[list(par_map)] = np.array(model.par_names, dtype="U128")

        return tuple(par_names.tolist())

    @property
    def to_fit(self) -> jax.Array:
        """
        Get which parameters will be fit this round in the same order as `self.parameters`.

        Returns
        -------
        to_fit : jax.Array
            Boolean array that is True for parameters that will be fit this round.
        """
        to_fit = jnp.zeros_like(self.parameters, dtype=bool)
        for model, par_map in zip(self.models, self.parameter_map):
            to_fit = to_fit.at[jnp.array(par_map)].set(jnp.array(model.to_fit))

        return to_fit

    @cached_property
    def to_fit_ever(self) -> jax.Array:
        """
        Get which parameters will ever be fit in the same order as `self.parameters`.

        Returns
        -------
        to_fit_ever : jax.Array
            Boolean array that is True for parameters that will be fit.
        """
        to_fit_ever = jnp.zeros_like(self.parameters, dtype=bool)
        for model, par_map in zip(self.models, self.parameter_map):
            to_fit_ever = to_fit_ever.at[jnp.array(par_map)].set(
                jnp.array(model.to_fit_ever)
            )

        return to_fit_ever

    @cached_property
    def priors(self) -> tuple[jax.Array, jax.Array]:
        """
        Get priors in the same order as `self.parameters`.

        Returns
        -------
        priors_low : jax.Array
            Lower bound of prior ranges.
        priors_high : jax.Array
            Upper bound of prior ranges.
        """
        priors_low = jnp.zeros_like(self.parameters)
        priors_high = jnp.zeros_like(self.parameters)
        for model, par_map in zip(self.models, self.parameter_map):
            priors_low = priors_low.at[jnp.array(par_map)].set(model.priors[0])
            priors_high = priors_high.at[jnp.array(par_map)].set(model.priors[1])

        return priors_low, priors_high

    @cached_property
    def cur_round(self) -> jax.Array:
        """
        Get the current round of fitting.

        Returns
        -------
        cur_round : int
            The current fitting round.
        """
        cur_rounds = jnp.array([model.cur_round for model in self.models]).ravel()
        cur_round = cur_rounds[0]

        return cur_round

    @cached_property
    def n_rounds(self) -> jax.Array:
        """
        Get the total rounds of fitting.

        Returns
        -------
        n_rounds : int
            The total fitting rounds.
        """
        n_rounds_all = jnp.array([model.n_rounds for model in self.models]).ravel()
        n_rounds = n_rounds_all[0]

        return n_rounds

    def model_grid(self, dataset_ind: int) -> jax.Array:
        """
        Get the model for a dataset on the computed grid.
        This currently assumes that all models have the same grid.
        This does apply metadata (i.e. beam convolution + prefactor).

        Parameters
        ----------
        dataset_ind : int
            The index of the dataset in `self.datasets` to use.

        Returns
        -------
        model_grid : jax.Array
            The model on the computed grid.
        """
        m_map = self.model_map[dataset_ind]
        md_map = self.metadata_map[dataset_ind]
        dset = self.datasets[dataset_ind]
        proj = jnp.zeros_like(self.models[m_map[0]].model)
        for i in m_map:
            model = self.models[i]
            md = md_map[i]
            ip = model.model
            for md_idx in md:
                ip = dset.metadata[md_idx].apply(ip)
            proj = proj.at[:].add(ip)
        return proj

    def model_proj(self, dataset_ind: int, datavec_ind: int) -> jax.Array:
        """
        Project the models held in the metamodel to some data in a dataset.

        Parameters
        ----------
        dataset_ind : int
            The index of the dataset in `self.datasets` to use.
        datavec_ind : int
            The index of the data in `self.datasets[dataset_ind].datavec` to use.

        Returns
        -------
        model_proj : jax.Array
            The metamodel projected into an array that matches the shape of
            `self.datasets[dataset_ind].datavec[datavec_ind]`.
        """
        dset = self.datasets[dataset_ind]
        m_map = self.model_map[dataset_ind]
        md_map = self.metadata_map[dataset_ind]
        data = dset.datavec[datavec_ind]
        if dset.mode == "tod":
            x = data.x
            y = data.y
        else:
            x, y = data.xy
        x = x * wu.rad_to_arcsec
        y = y * wu.rad_to_arcsec
        proj = jnp.zeros_like(x)
        for i in m_map:
            model = self.models[i]
            md = md_map[i]
            ip = model.model
            for md_idx in md:
                ip = dset.metadata[md_idx].apply(ip)
            _proj = _project(ip, x, y, model.xyz)
            for md_idx in md:
                _proj = dset.metadata[md_idx].apply_proj(_proj)
            proj = proj.at[:].add(_proj)
        return proj

    def model_grad_proj(self, dataset_ind: int, datavec_ind: int) -> jax.Array:
        """
        Project the models held in the metamodel to some data in a dataset.

        Parameters
        ----------
        dataset_ind : int
            The index of the dataset in `self.datasets` to use.
        datavec_ind : int
            The index of the data in `self.datasets[dataset_ind].datavec` to use.

        Returns
        -------
        model_grad_proj : jax.Array
            The metamodel gradients projected into an array with `len(self.parameters)` elements
            where each element matches the shape of `self.datasets[dataset_ind].datavec[datavec_ind]`.
        """
        dset = self.datasets[dataset_ind]
        m_map = self.model_map[dataset_ind]
        md_map = self.metadata_map[dataset_ind]
        data = dset.datavec[datavec_ind]
        if dset.mode == "tod":
            x = data.x
            y = data.y
        else:
            x, y = data.xy
        x = x * wu.rad_to_arcsec
        y = y * wu.rad_to_arcsec
        proj_grad = jnp.zeros(self.parameters.shape + x.shape)
        for i in m_map:
            model = self.models[i]
            md = md_map[i]
            par_map = self.parameter_map[i]
            ip_grad = model.model_grad[1]
            for md_idx in md:
                ip_grad = dset.metadata[md_idx].apply_grad(ip_grad)
            _proj_grad = _project_vectorized(ip_grad, x, y, model.xyz)
            for md_idx in md:
                _proj_grad = dset.metadata[md_idx].apply_grad_proj(_proj_grad)
            proj_grad = proj_grad.at[jnp.array(par_map)].add(_proj_grad)
        return proj_grad

    def get_dataset_ind(self, dset_name: str) -> int:
        """
        Get the index of a dataset.

        Parameters
        ----------
        dset_name : str
            The name of the dataset to find.

        Returns
        -------
        dataset_ind : int
            The index of the dataset, or -1 if no dataset with `dset_name` exists.
        """
        dataset_ind = -1
        for i, dset in enumerate(self.datasets):
            if dset_name == dset.name:
                dataset_ind = i
                break
        return dataset_ind

    def update(self, vals: jax.Array, errs: jax.Array, chisq: jax.Array) -> Self:
        """
        Update the parameter values and errors as well as the model chi-squared
        for all models in the metamodel.

        Parameters
        ----------
        vals : jax.Array
            The new parameter values.
            Should be in the same order as `self.parameters`.
        errs : jax.Array
            The new parameter errors.
            Should be in the same order as `self.parameters`.
        chisq : jax.Array
            The new chi-squared.
            Should be a scalar float array.

        Returns
        -------
        updated : MetaModel
            The updated metamodel.
            While nominally the metamodel will update in place, returning it
            allows us to use this function in JITed functions.
        """
        self.parameters = vals
        self.errors = errs
        self.chisq = chisq
        self.models = tuple(
            deepcopy(model).update(
                vals[jnp.array(par_map)], errs[jnp.array(par_map)], chisq
            )
            for model, par_map in zip(self.models, self.parameter_map)
        )

        return copy(self)

    def add_round(self, to_fit: jax.Array) -> Self:
        """
        Add an additional round to the metamodel.

        Parameters
        ----------
        to_fit : jax.Array
            Boolean array denoting which parameters to fit this round.
            Should be in the same order as `self.parameters`.

        Returns
        -------
        updated : MetaModel
            The updated metamodel with the new round.
            While nominally the model will update in place, returning it
            allows us to use this function in JITed functions.
        """
        self.__dict__.pop("n_rounds", None)
        self.__dict__.pop("to_fit", None)
        self.__dict__.pop("to_fit_ever", None)
        self.models = tuple(
            model.add_round(to_fit[jnp.array(par_map)])
            for model, par_map in zip(self.models, self.parameter_map)
        )

        return self

    def set_round(self, new_round: int) -> Self:
        """
        Set the round of the metamodel.

        Parameters
        ----------
        new_round : int
            The number of the round to go to.

        Returns
        -------
        updated : MetaModel
            The updated metamodel with the round updated.
            While nominally the model will update in place, returning it
            allows us to use this function in JITed functions.
        """
        if new_round > self.n_rounds or new_round < 0:
            raise ValueError("Trying to set a round that doesn't exist!")
        self.__dict__.pop("cur_round", None)
        self.__dict__.pop("to_fit", None)
        for model in self.models:
            model.cur_round = new_round

        return self

    def save(self, path: str, state: dict = {}):
        """
        Serialize the model to a file with dill.

        Parameters
        ----------
        path : str
            The file to save to.
            Does not check to see if the path is valid.
        state : dict
            Dictionary of metadata to understand the state when we are saving.
        """
        datavecs = []
        comms = []
        gcomm = self.global_comm
        for dataset in self.datasets:
            datavecs += [dataset.datavec]
            comms += [dataset.global_comm]
            dataset.datavec = None
            dataset.global_comm = wu.NullComm()
        self.global_comm = wu.NullComm()
        with open(path, "wb") as f:
            dill.dump((self, state), f)
        for dataset, datavec, comm in zip(self.datasets, datavecs, comms):
            dataset.datavec = datavec
            dataset.global_comm = comm
        self.global_comm = gcomm

    @classmethod
    def load(cls, path: str) -> tuple[Self, dict]:
        """
        Load the model from a file with dill.

        Parameters
        ----------
        path : str
            The path to the saved model.
            Does not check to see if the path is valid.

        Returns
        -------
        model : MetaModel
            The loaded model.
        state : dict
            Dictionary of metadata to understand the state from when we saved.
        """
        with open(path, "rb") as f:
            return dill.load(f)

    def remove_structs(self, cfg):
        """
        Create a new metamodel with marked structures removed.

        Parameters
        ----------
        cfg : dict
            The config loaded into a dict.

        Returns
        -------
        metamodel : MetaModel
            The metamodel described with structures removed.
        """
        new_metamodel = self.__class__.from_config(
            self.global_comm, cfg, self.datasets, True
        )
        new_metamodel = new_metamodel.update(self.parameters, self.errors, self.chisq)
        return new_metamodel

    @classmethod
    def from_config(
        cls,
        global_comm: MPI.Comm | MPI.Intracomm | wu.NullComm,
        cfg: dict,
        datasets: tuple[DataSet, ...],
        remove_structs: bool = False,
    ) -> Self:
        """
        Create an instance of metamodel from a witcher config.

        Parameters
        ----------
        global_comm : MPI.Comm | MPI.Intracomm | wu.NullComm
            The communicator for this metamodel.
        cfg : dict
            The config loaded into a dict.
        datasets : tuple[DataSet, ...]
            The datasets to associate with this model.
        remove_structs : bool, default: False
            If True, don't include structures marked for removal.

        Returns
        -------
        metamodel : MetaModel
            The metamodel described by the config.
        """
        metacfg = cfg.get("metamodel", {})

        # First let's load all the models
        mlist = metacfg.get("models", ["model"])
        models = tuple(
            Model.from_cfg(cfg, model_field, False, remove_structs)
            for model_field in mlist
        )
        model_ind = {model_field: i for i, model_field in enumerate(mlist)}

        # Now let's make the model map
        mmap = metacfg.get("model_map", {dset.name: mlist for dset in datasets})
        model_map = tuple(
            tuple(
                sorted(tuple(model_ind[model_field] for model_field in mmap[dset.name]))
            )
            for dset in datasets
        )

        par_map, pars, errs = _compute_par_map_and_pars(metacfg, models)

        metadata_map = _compute_metadata_map(models, datasets)

        return cls(
            global_comm,
            models,
            datasets,
            par_map,
            model_map,
            metadata_map,
            pars,
            errs,
            models[0].chisq,
        )

    # Functions for making this a pytree.
    # Don't call these on your own.
    def tree_flatten(self) -> tuple[tuple, tuple]:
        children = (
            self.models,
            self.datasets,
            self.parameters,
            self.errors,
            self.chisq,
        )
        aux_data = (
            self.global_comm,
            self.model_map,
            self.parameter_map,
            self.metadata_map,
        )

        return (children, aux_data)

    @classmethod
    def tree_unflatten(cls, aux_data, children) -> Self:
        global_comm, model_map, parameter_map, metadata_map = aux_data
        models, datasets, parameters, errors, chisq = children

        return cls(
            global_comm,
            models,
            datasets,
            parameter_map,
            model_map,
            metadata_map,
            parameters,
            errors,
            chisq,
        )

cur_round cached property

Get the current round of fitting.

Returns:

cur_round (int): The current fitting round.

n_rounds cached property

Get the total rounds of fitting.

Returns:

n_rounds (int): The total fitting rounds.

par_names cached property

Get the parameter names in the same order as self.parameters.

Returns:

par_names (tuple[str, ...]): Tuple of parameter names.

priors cached property

Get priors in the same order as self.parameters.

Returns:

priors_low (Array): Lower bound of prior ranges.

priors_high (Array): Upper bound of prior ranges.

to_fit property

Get which parameters will be fit this round in the same order as self.parameters.

Returns:

to_fit (Array): Boolean array that is True for parameters that will be fit this round.

to_fit_ever cached property

Get which parameters will ever be fit in the same order as self.parameters.

Returns:

to_fit_ever (Array): Boolean array that is True for parameters that will be fit.
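The scatter pattern behind `to_fit` and `to_fit_ever` can be sketched in plain NumPy (values hypothetical). Note that for a shared parameter index, later models overwrite earlier ones, matching the `.at[...].set(...)` order in the source:

```python
import numpy as np

# Hypothetical maps: models 0 and 1 share parameter index 1.
parameter_map = ((0, 1), (1, 2, 3))
model_to_fit = ([True, False], [False, True, True])

to_fit = np.zeros(4, dtype=bool)
for pm, flags in zip(parameter_map, model_to_fit):
    # Each model's flags land at the positions its map points to;
    # a shared index takes the value from the last model written.
    to_fit[list(pm)] = flags
print(to_fit)  # [ True False  True  True]
```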

add_round(to_fit)

Add an additional round to the metamodel.

Parameters:

to_fit (Array): Boolean array denoting which parameters to fit this round. Should be in the same order as self.parameters. Required.

Returns:

updated (MetaModel): The updated metamodel with the new round. While nominally the model will update in place, returning it allows us to use this function in JITed functions.

Source code in witch/containers/metamodel.py
def add_round(self, to_fit: jax.Array) -> Self:
    """
    Add an additional round to the metamodel.

    Parameters
    ----------
    to_fit : jax.Array
        Boolean array denoting which parameters to fit this round.
        Should be in the same order as `self.parameters`.

    Returns
    -------
    updated : MetaModel
        The updated metamodel with the new round.
        While nominally the model will update in place, returning it
        allows us to use this function in JITed functions.
    """
    self.__dict__.pop("n_rounds", None)
    self.__dict__.pop("to_fit", None)
    self.__dict__.pop("to_fit_ever", None)
    self.models = tuple(
        model.add_round(to_fit[jnp.array(par_map)])
        for model, par_map in zip(self.models, self.parameter_map)
    )

    return self

check_compatibility(other)

Check if another MetaModel instance is compatible with this one. Used for checkpointing.

Parameters:

other (MetaModel): The MetaModel instance to check compatibility with. Required.

Returns:

compatible (bool): True if compatible, False if not.

Source code in witch/containers/metamodel.py
def check_compatibility(self, other: Self) -> bool:
    """
    Check if another `MetaModel` instance is compatible with this one.
    Used for checkpointing.

    Parameters
    ----------
    other : MetaModel
        The `MetaModel` instance to check compatibility with.

    Returns
    -------
    compatible : bool
        True if compatible,
        False if not.
    """
    # Check models
    if len(self.models) != len(other.models):
        return False
    for sm, om in zip(self.models, other.models):
        if not sm.check_compatibility(om):
            return False

    # Check datasets (just names for now)
    if len(self.datasets) != len(other.datasets):
        return False
    for sd, od in zip(self.datasets, other.datasets):
        if sd.name != od.name:
            return False

    # Check mappings
    if self.parameter_map != other.parameter_map:
        return False
    if self.model_map != other.model_map:
        return False
    if self.metadata_map != other.metadata_map:
        return False

    return True

from_config(global_comm, cfg, datasets, remove_structs=False) classmethod

Create an instance of metamodel from a witcher config.

Parameters:

global_comm (Comm | Intracomm | NullComm): The communicator for this metamodel. Required.

cfg (dict): The config loaded into a dict. Required.

datasets (tuple[DataSet, ...]): The datasets to associate with this model. Required.

remove_structs (bool): If True, don't include structures marked for removal. Default: False.

Returns:

metamodel (MetaModel): The metamodel described by the config.

Source code in witch/containers/metamodel.py
@classmethod
def from_config(
    cls,
    global_comm: MPI.Comm | MPI.Intracomm | wu.NullComm,
    cfg: dict,
    datasets: tuple[DataSet, ...],
    remove_structs: bool = False,
) -> Self:
    """
    Create an instance of metamodel from a witcher config.

    Parameters
    ----------
    global_comm : MPI.Comm | MPI.Intracomm | wu.NullComm
        The communicator for this metamodel.
    cfg : dict
        The config loaded into a dict.
    datasets : tuple[DataSet, ...]
        The datasets to associate with this model.
    remove_structs : bool, default: False
        If True, don't include structures marked for removal.

    Returns
    -------
    metamodel : MetaModel
        The metamodel described by the config.
    """
    metacfg = cfg.get("metamodel", {})

    # First let's load all the models
    mlist = metacfg.get("models", ["model"])
    models = tuple(
        Model.from_cfg(cfg, model_field, False, remove_structs)
        for model_field in mlist
    )
    model_ind = {model_field: i for i, model_field in enumerate(mlist)}

    # Now let's make the model map
    mmap = metacfg.get("model_map", {dset.name: mlist for dset in datasets})
    model_map = tuple(
        tuple(
            sorted(tuple(model_ind[model_field] for model_field in mmap[dset.name]))
        )
        for dset in datasets
    )

    par_map, pars, errs = _compute_par_map_and_pars(metacfg, models)

    metadata_map = _compute_metadata_map(models, datasets)

    return cls(
        global_comm,
        models,
        datasets,
        par_map,
        model_map,
        metadata_map,
        pars,
        errs,
        models[0].chisq,
    )
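A hypothetical config fragment illustrating the keys `from_config` reads (`metamodel.models` and `metamodel.model_map`; the dataset and model names here are made up, and the schema of each model block is defined elsewhere in the config):

```python
cfg = {
    "metamodel": {
        "models": ["cluster", "point_source"],
        # Maps each dataset name to the model fields it uses.
        "model_map": {
            "act_90": ["cluster", "point_source"],
            "act_150": ["cluster"],
        },
    },
}

# Reproduce the index lookup performed above.
mlist = cfg["metamodel"]["models"]
model_ind = {field: i for i, field in enumerate(mlist)}
model_map = tuple(
    tuple(sorted(model_ind[f] for f in cfg["metamodel"]["model_map"][name]))
    for name in ("act_90", "act_150")
)
print(model_map)  # ((0, 1), (0,))
```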

get_dataset_ind(dset_name)

Get the index of a dataset.

Parameters:

dset_name (str): The name of the dataset to find. Required.

Returns:

dataset_ind (int): The index of the dataset, or -1 if no dataset with that name exists.

Source code in witch/containers/metamodel.py
def get_dataset_ind(self, dset_name: str) -> int:
    """
    Get the index of a dataset.

    Parameters
    ----------
    dset_name : str
        The name of the dataset to find.

    Returns
    -------
    dataset_ind : int
        The index of the dataset, or -1 if no dataset with `dset_name` exists.
    """
    dataset_ind = -1
    for i, dset in enumerate(self.datasets):
        if dset_name == dset.name:
            dataset_ind = i
            break
    return dataset_ind
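Because a missing name yields the sentinel -1, callers should check the result before indexing. A standalone sketch of the same lookup (dataset names hypothetical):

```python
def find_index(names, target):
    # Linear search mirroring MetaModel.get_dataset_ind: returns the
    # first matching index, or -1 when the name is absent.
    for i, name in enumerate(names):
        if name == target:
            return i
    return -1

names = ("act_90", "act_150")  # hypothetical dataset names
print(find_index(names, "act_150"))  # 1
print(find_index(names, "planck"))   # -1
```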

load(path) classmethod

Load the model from a file with dill.

Parameters:

path (str): The path to the saved model. Does not check to see if the path is valid. Required.

Returns:

model (MetaModel): The loaded model.

state (dict): Dictionary of metadata to understand the state from when we saved.

Source code in witch/containers/metamodel.py
@classmethod
def load(cls, path: str) -> tuple[Self, dict]:
    """
    Load the model from a file with dill.

    Parameters
    ----------
    path : str
        The path to the saved model.
        Does not check to see if the path is valid.

    Returns
    -------
    model : MetaModel
        The loaded model.
    state : dict
        Dictionary of metadata to understand the state from when we saved.
    """
    with open(path, "rb") as f:
        return dill.load(f)

model_grad_proj(dataset_ind, datavec_ind)

Project the models held in the metamodel to some data in a dataset.

Parameters:

dataset_ind (int): The index of the dataset in self.datasets to use. Required.

datavec_ind (int): The index of the data in self.datasets[dataset_ind].datavec to use. Required.

Returns:

model_grad_proj (Array): The metamodel gradients projected into an array with len(self.parameters) elements where each element matches the shape of self.datasets[dataset_ind].datavec[datavec_ind].

Source code in witch/containers/metamodel.py
def model_grad_proj(self, dataset_ind: int, datavec_ind: int) -> jax.Array:
    """
    Project the models held in the metamodel to some data in a dataset.

    Parameters
    ----------
    dataset_ind : int
        The index of the dataset in `self.datasets` to use.
    datavec_ind : int
        The index of the data in `self.datasets[dataset_ind].datavec` to use.

    Returns
    -------
    model_grad_proj : jax.Array
        The metamodel gradients projected into an array with `len(self.parameters)` elements
        where each element matches the shape of `self.datasets[dataset_ind].datavec[datavec_ind]`.
    """
    dset = self.datasets[dataset_ind]
    m_map = self.model_map[dataset_ind]
    md_map = self.metadata_map[dataset_ind]
    data = dset.datavec[datavec_ind]
    if dset.mode == "tod":
        x = data.x
        y = data.y
    else:
        x, y = data.xy
    x = x * wu.rad_to_arcsec
    y = y * wu.rad_to_arcsec
    proj_grad = jnp.zeros(self.parameters.shape + x.shape)
    for i in m_map:
        model = self.models[i]
        md = md_map[i]
        par_map = self.parameter_map[i]
        ip_grad = model.model_grad[1]
        for md_idx in md:
            ip_grad = dset.metadata[md_idx].apply_grad(ip_grad)
        _proj_grad = _project_vectorized(ip_grad, x, y, model.xyz)
        for md_idx in md:
            _proj_grad = dset.metadata[md_idx].apply_grad_proj(_proj_grad)
        proj_grad = proj_grad.at[jnp.array(par_map)].add(_proj_grad)
    return proj_grad
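The essential bookkeeping here is the final scatter-add: each model's gradient block lands in the rows of the shared parameter-length array selected by its `parameter_map` entry, so shared parameters accumulate contributions from every model. A plain-Python sketch of that accumulation with a hypothetical two-model layout (no JAX):

```python
# Hypothetical layout: models 0 and 1 share global parameter 1
parameter_map = ((0, 1), (1, 2))
model_grads = ([0.5, 1.0], [2.0, 3.0])  # per-model gradient blocks
n_params = 3

proj_grad = [0.0] * n_params
for par_map, grads in zip(parameter_map, model_grads):
    for local_i, global_i in enumerate(par_map):
        # Shared parameters accumulate contributions from every model
        proj_grad[global_i] += grads[local_i]

print(proj_grad)  # parameter 1 receives 1.0 + 2.0
```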

model_grid(dataset_ind)

Get the model for a dataset on the computed grid. This currently assumes that all models have the same grid. This does apply metadata (i.e. beam convolution + prefactor).

Parameters:

- `dataset_ind` (int, required): The index of the dataset in `self.datasets` to use.

Returns:

- `model_grid` (Array): The model on the computed grid.

Source code in witch/containers/metamodel.py
def model_grid(self, dataset_ind: int) -> jax.Array:
    """
    Get the model for a dataset on the computed grid.
    This currently assumes that all models have the same grid.
    This does apply metadata (i.e. beam convolution + prefactor).

    Parameters
    ----------
    dataset_ind : int
        The index of the dataset in `self.datasets` to use.

    Returns
    -------
    model_grid : jax.Array
        The model on the computed grid.
    """
    m_map = self.model_map[dataset_ind]
    md_map = self.metadata_map[dataset_ind]
    dset = self.datasets[dataset_ind]
    proj = jnp.zeros_like(self.models[m_map[0]].model)
    for i in m_map:
        model = self.models[i]
        md = md_map[i]
        ip = model.model
        for md_idx in md:
            ip = dset.metadata[md_idx].apply(ip)
        proj = proj.at[:].add(ip)
    return proj

model_proj(dataset_ind, datavec_ind)

Project the models held in the metamodel to some data in a dataset.

Parameters:

- `dataset_ind` (int, required): The index of the dataset in `self.datasets` to use.
- `datavec_ind` (int, required): The index of the data in `self.datasets[dataset_ind].datavec` to use.

Returns:

- `model_proj` (Array): The metamodel projected into an array that matches the shape of `self.datasets[dataset_ind].datavec[datavec_ind]`.

Source code in witch/containers/metamodel.py
def model_proj(self, dataset_ind: int, datavec_ind: int) -> jax.Array:
    """
    Project the models held in the metamodel to some data in a dataset.

    Parameters
    ----------
    dataset_ind : int
        The index of the dataset in `self.datasets` to use.
    datavec_ind : int
        The index of the data in `self.datasets[dataset_ind].datavec` to use.

    Returns
    -------
    model_proj : jax.Array
        The metamodel projected into an array that matches the shape of
        `self.datasets[dataset_ind].datavec[datavec_ind]`.
    """
    dset = self.datasets[dataset_ind]
    m_map = self.model_map[dataset_ind]
    md_map = self.metadata_map[dataset_ind]
    data = dset.datavec[datavec_ind]
    if dset.mode == "tod":
        x = data.x
        y = data.y
    else:
        x, y = data.xy
    x = x * wu.rad_to_arcsec
    y = y * wu.rad_to_arcsec
    proj = jnp.zeros_like(x)
    for i in m_map:
        model = self.models[i]
        md = md_map[i]
        ip = model.model
        for md_idx in md:
            ip = dset.metadata[md_idx].apply(ip)
        _proj = _project(ip, x, y, model.xyz)
        for md_idx in md:
            _proj = dset.metadata[md_idx].apply_proj(_proj)
        proj = proj.at[:].add(_proj)
    return proj

remove_structs(cfg)

Create a new metamodel with marked structures removed.

Parameters:

- `cfg` (dict, required): The config loaded into a dict.

Returns:

- `metamodel` (MetaModel): The metamodel described with structures removed.

Source code in witch/containers/metamodel.py
def remove_structs(self, cfg):
    """
    Create a new metamodel with marked structures removed.

    Parameters
    ----------
    cfg : dict
        The config loaded into a dict.

    Returns
    -------
    metamodel : MetaModel
        The metamodel described with structures removed.
    """
    new_metamodel = self.__class__.from_config(
        self.global_comm, cfg, self.datasets, True
    )
    new_metamodel = new_metamodel.update(self.parameters, self.errors, self.chisq)
    return new_metamodel

save(path, state={})

Serialize the model to a file with dill.

Parameters:

- `path` (str, required): The file to save to. Does not check to see if the path is valid.
- `state` (dict, default `{}`): Dictionary of metadata to understand the state when we are saving.
Source code in witch/containers/metamodel.py
def save(self, path: str, state: dict = {}):
    """
    Serialize the model to a file with dill.

    Parameters
    ----------
    path : str
        The file to save to.
        Does not check to see if the path is valid.
    state : dict
        Dictionary of metadata to understand the state when we are saving.
    """
    datavecs = []
    comms = []
    gcomm = self.global_comm
    for dataset in self.datasets:
        datavecs += [dataset.datavec]
        comms += [dataset.global_comm]
        dataset.datavec = None
        dataset.global_comm = wu.NullComm()
    self.global_comm = wu.NullComm()
    with open(path, "wb") as f:
        dill.dump((self, state), f)
    for dataset, datavec, comm in zip(self.datasets, datavecs, comms):
        dataset.datavec = datavec
        dataset.global_comm = comm
    self.global_comm = gcomm

set_round(new_round)

Set the round of the metamodel.

Parameters:

- `new_round` (int, required): The number of the round to go to.

Returns:

- `updated` (MetaModel): The updated metamodel with the round updated. While nominally the model will update in place, returning it allows us to use this function in JITed functions.

Source code in witch/containers/metamodel.py
def set_round(self, new_round: int) -> Self:
    """
    Set the round of the metamodel.

    Parameters
    ----------
    new_round : int
        The number of the round to go to.

    Returns
    -------
    updated : MetaModel
        The updated metamodel with the round updated.
        While nominally the model will update in place, returning it
        allows us to use this function in JITed functions.
    """
    if new_round > self.n_rounds or new_round < 0:
        raise ValueError("Trying to set a round that doesn't exist!")
    self.__dict__.pop("cur_round", None)
    self.__dict__.pop("to_fit", None)
    for model in self.models:
        model.cur_round = new_round

    return self

update(vals, errs, chisq)

Update the parameter values and errors as well as the model chi-squared for all models in the metamodel.

Parameters:

- `vals` (Array, required): The new parameter values. Should be in the same order as `pars`.
- `errs` (Array, required): The new parameter errors. Should be in the same order as `pars`.
- `chisq` (Array, required): The new chi-squared. Should be a scalar float array.

Returns:

- `updated` (MetaModel): The updated metamodel. While nominally the metamodel will update in place, returning it allows us to use this function in JITed functions.

Source code in witch/containers/metamodel.py
def update(self, vals: jax.Array, errs: jax.Array, chisq: jax.Array) -> Self:
    """
    Update the parameter values and errors as well as the model chi-squared
    for all models in the metamodel.

    Parameters
    ----------
    vals : jax.Array
        The new parameter values.
        Should be in the same order as `pars`.
    errs : jax.Array
        The new parameter errors.
        Should be in the same order as `pars`.
    chisq : jax.Array
        The new chi-squared.
        Should be a scalar float array.

    Returns
    -------
    updated : MetaModel
        The updated metamodel.
        While nominally the metamodel will update in place, returning it
        allows us to use this function in JITed functions.
    """
    self.parameters = vals
    self.errors = errs
    self.chisq = chisq
    self.models = tuple(
        deepcopy(model).update(
            vals[jnp.array(par_map)], errs[jnp.array(par_map)], chisq
        )
        for model, par_map in zip(self.models, self.parameter_map)
    )

    return copy(self)
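`update` is the gather direction of the same mapping: each model receives the slice of the global vector picked out by its `parameter_map` entry, mirroring `vals[jnp.array(par_map)]`. A small plain-list sketch with a hypothetical map:

```python
# Hypothetical map: model 0 uses global params 0,1; model 1 uses 1,2
parameter_map = ((0, 1), (1, 2))
vals = [10.0, 20.0, 30.0]  # global parameter vector

# Equivalent of vals[jnp.array(par_map)] for each model
per_model_vals = [[vals[i] for i in par_map] for par_map in parameter_map]
print(per_model_vals)  # shared param 1 appears in both models' slices
```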

deep_merge(a, b)

Based on https://gist.github.com/angstwad/bf22d1822c38a92ec0a9?permalink_comment_id=3517209

Source code in witch/fitter.py
def deep_merge(a: dict, b: dict) -> dict:
    """
    Based on https://gist.github.com/angstwad/bf22d1822c38a92ec0a9?permalink_comment_id=3517209
    """
    result = deepcopy(a)
    for bk, bv in b.items():
        av = result.get(bk)
        if isinstance(av, dict) and isinstance(bv, dict):
            result[bk] = deep_merge(av, bv)
        else:
            result[bk] = deepcopy(bv)
    return result
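For example, merging an override config into a base: values from the second argument win at every depth, and nested dicts are merged rather than replaced (the function is repeated here so the sketch is self-contained):

```python
from copy import deepcopy


def deep_merge(a: dict, b: dict) -> dict:
    # Same recursion as above: b's values win, nested dicts merge
    result = deepcopy(a)
    for bk, bv in b.items():
        av = result.get(bk)
        if isinstance(av, dict) and isinstance(bv, dict):
            result[bk] = deep_merge(av, bv)
        else:
            result[bk] = deepcopy(bv)
    return result


base = {"fit": {"maxiter": 10, "chitol": 1e-5}, "name": "base"}
override = {"fit": {"maxiter": 50}, "name": "run1"}
merged = deep_merge(base, override)
print(merged)  # {'fit': {'maxiter': 50, 'chitol': 1e-05}, 'name': 'run1'}
```

Because everything is deep-copied, neither input dict is mutated.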

joint_objective(metamodel, do_loglike=True, do_grad=True, do_curve=True)

Compute the objective for multiple datasets in an MPI aware way.

Parameters:

- `metamodel` (MetaModel, required): The MetaModel we are trying to fit.
- `do_loglike` (bool, default `True`): If True, compute the log-likelihood between the model and the data.
- `do_grad` (bool, default `True`): If True, compute the gradient of chi-squared with respect to the model parameters.
- `do_curve` (bool, default `True`): If True, compute the curvature of chi-squared with respect to the model parameters.

Returns:

- `loglike` (Array): The log-likelihood between the model and data. If `do_loglike` is False, this is `jnp.array(0)`.
- `grad` (Array): The gradient of the parameters at their current values. If `do_grad` is False, this is an array of zeros. This is a `(npar,)` array.
- `curve` (Array): The curvature of the parameter space at the current values. If `do_curve` is False, this is an array of zeros. This is a `(npar, npar)` array.

Source code in witch/objective.py
@partial(jax.jit, static_argnames=("do_loglike", "do_grad", "do_curve"))
def joint_objective(
    metamodel: "MetaModel",
    do_loglike: bool = True,
    do_grad: bool = True,
    do_curve: bool = True,
) -> tuple[jax.Array, jax.Array, jax.Array]:
    """
    Compute the objective for multiple datasets in an MPI aware way.


    Parameters
    ----------
    metamodel : MetaModel
        The `MetaModel` we are trying to fit.
    do_loglike : bool, default: True
        If True then we will compute the log-likelihood between
        the model and the data.
    do_grad : bool, default: True
        If True then compute the gradient of chi-squared with
        respect to the model parameters.
    do_curve : bool, default: True
        If True then compute the curvature of chi-squared with
        respect to the model parameters.

    Returns
    -------
    loglike : jax.Array
        The log-likelihood between the model and data.
        If `do_loglike` is `False` then this is `jnp.array(0)`.
    grad : jax.Array
        The gradient of the parameters at their current values.
        If `do_grad` is `False` then this is an array of zeros.
        This is a `(npar,)` array.
    curve : jax.Array
        The curvature of the parameter space at the current values.
        If `do_curve` is `False` then this is an array of zeros.
        This is a `(npar, npar)` array.
    """
    global_comm = metamodel.global_comm
    npar = len(metamodel.parameters)
    loglike = jnp.array(0)
    grad = jnp.zeros(npar)
    curve = jnp.zeros((npar, npar))
    for i in range(len(metamodel.datasets)):
        _loglike, _grad, _curve = metamodel.datasets[i].objective(
            metamodel,
            i,
            do_loglike,
            do_grad,
            do_curve,
        )
        loglike += _loglike
        grad = grad.at[:].add(_grad)
        curve = curve.at[:].add(_curve)
    mpi4jax.barrier(comm=global_comm)
    if do_loglike:
        loglike = mpi4jax.allreduce(loglike, MPI.SUM, comm=global_comm)
    if do_grad:
        grad = mpi4jax.allreduce(grad, MPI.SUM, comm=global_comm)
    if do_curve:
        curve = mpi4jax.allreduce(curve, MPI.SUM, comm=global_comm)

    return loglike, grad, curve

load_config(start_cfg, cfg_path)

We want to load a config and, if it has the key "base", load that config as well and merge them. We only want to take keys from base that are not in the original config, so we merge the original into the newly loaded one.

Source code in witch/fitter.py
def load_config(start_cfg, cfg_path):
    """
    We want to load a config and if it has the key "base",
    load that as well and merge them.
    We only want to take things from base that are not in the original config
    so we merge the original into the newly loaded one.
    """
    with open(cfg_path) as file:
        new_cfg = yaml.safe_load(file)
    cfg = deep_merge(new_cfg, start_cfg)
    if "base" in new_cfg:
        base_path = new_cfg["base"]
        if not os.path.isabs(base_path):
            base_path = os.path.join(os.path.dirname(cfg_path), base_path)
        return load_config(cfg, base_path)
    return cfg
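Relative `base` paths are resolved against the directory of the config that references them. A sketch of just that path logic, with hypothetical paths:

```python
import os

cfg_path = "configs/runs/run1.yaml"  # hypothetical config being loaded
base_path = "../base.yaml"           # relative "base" entry inside it

# Same resolution as load_config: anchor at the including config's directory
if not os.path.isabs(base_path):
    base_path = os.path.join(os.path.dirname(cfg_path), base_path)

print(os.path.normpath(base_path))  # configs/base.yaml on POSIX
```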

para_to_non_para(model, n_rounds, to_copy, rmax, struct_num, sig_params, default, mm_cfg)

Function which approximately converts cluster profiles into a non-parametric form. Note this is only approximate and should be fit afterwards.

Parameters:

- `model` (Model, required): The parametric model to start from.
- `n_rounds` (Optional[int], required): Number of rounds to fit for the output model. If None, copy from self.
- `to_copy` (list[str], required): List of structures, by name, to copy.
- `rmax` (float, required): Maximum radius of the rbins.
- `struct_num` (int, required): Structure within the model to calculate rbins on.
- `sig_params` (list[int], required): Parameters to consider for computing significance. Only the first match will be used.
- `default` (tuple[int, ...], required): Default rbins to be returned if generation fails.
- `mm_cfg` (dict, required): MetaModel config; `parameter_map` will be modified if need be.

Returns:

- `Model` (Model): Model with a non-parametric representation of the input model.

Raises:

- `ValueError`: If there are no models to copy.

Source code in witch/nonparametric.py
def para_to_non_para(
    model,
    n_rounds: Optional[int],
    to_copy: list[str],
    rmax: float,
    struct_num: int,
    sig_params: list[int],
    default: tuple[int, ...],
    mm_cfg: dict,
) -> Model:
    """
    Function which approximately converts cluster profiles into a non-parametric form. Note this is
    only approximate and should be fit afterwards.

    Parameters
    ----------
    model : Model
        The parametric model to start from.
    n_rounds : Optional[int]
        Number of rounds to fit for the output model. If None, copy from self.
    to_copy : list[str]
        List of structures, by name, to copy.
    rmax : float
        Maximum radius of the rbins
    struct_num : int
        Structure within model to calculate rbins on
    sig_params : list[int]
        Parameters to consider for computing significance.
        Only the first match will be used.
    default : tuple[int, ...]
        Default rbins to be returned if generation fails.
    mm_cfg : dict
        MetaModel config; `parameter_map` will be modified if need be.

    Returns
    -------
    Model : Model
        Model with a non-parametric representation of the input model.

    Raises
    ------
    ValueError
        If there are no models to copy.
    """
    cur_model = deepcopy(
        model
    )  # Make a copy of model, we don't want to lose structures
    i = 0  # Make sure we keep at least one struct
    to_remove = []
    for structure in cur_model.structures:
        if structure.structure not in to_copy:
            to_remove.append(structure.name)

    if len(to_remove) == len(cur_model.structures):
        raise ValueError("Error: no model structures in {}".format(to_copy))
    for struct in to_remove:
        cur_model.remove_struct(struct)
    params = jnp.array(cur_model.pars)
    params = jnp.ravel(params)
    pressure = core.model3D(
        cur_model.xyz, tuple(cur_model.n_struct), tuple(cur_model.n_rbins), params
    )
    pressure = pressure[
        ..., int(pressure.shape[2] / 2)
    ]  # Take middle slice. Close enough is good enough here, dont care about rounding

    pixsize = np.abs(cur_model.xyz[1][0][1] - cur_model.xyz[1][0][0]).item()

    rs, bin1d, _ = wu.bin_map(pressure, pixsize)

    rbins = get_rbins(cur_model, rmax, struct_num, sig_params, default)
    rbins = np.append(rbins, np.array([np.amax(rs)]))

    condlist = [
        jnp.array((rbins[i] <= rs) & (rs < rbins[i + 1]))
        for i in range(len(rbins) - 2, -1, -1)
    ]

    amps, pows, c = profile_to_broken_power(rs, bin1d, condlist, rbins)

    priors = (-1 * np.inf, np.inf)
    if n_rounds is None:
        n_rounds = model.n_rounds
    if not isinstance(n_rounds, int):
        raise ValueError("Non int n_rounds")
    parameters = [
        Parameter(
            "rbins",
            tuple([False] * n_rounds),
            jnp.atleast_1d(jnp.array(rbins[:-1], dtype=float)),  # Drop last bin
            jnp.zeros_like(jnp.atleast_1d(jnp.array(rbins[:-1])), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "amps",
            tuple([True] * n_rounds),
            jnp.atleast_1d(jnp.array(amps, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(amps)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "pows",
            tuple([True] * n_rounds),
            jnp.atleast_1d(jnp.array(pows, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(pows)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "dx",  # TODO: miscentering
            tuple([False] * n_rounds),
            jnp.atleast_1d(jnp.array(0, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(0)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "dy",  # TODO: miscentering
            tuple([False] * n_rounds),
            jnp.atleast_1d(jnp.array(0, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(0)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "dz",  # TODO: miscentering
            tuple([False] * n_rounds),
            jnp.atleast_1d(jnp.array(0, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(0)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
        Parameter(
            "c",
            tuple([True] * n_rounds),
            jnp.atleast_1d(jnp.array(c, dtype=float)),
            jnp.zeros_like(jnp.atleast_1d(jnp.array(0)), dtype=float),
            jnp.array(priors, dtype=float),
        ),
    ]

    structure = Structure(
        "nonpara_power", "nonpara_power", parameters, n_rbins=len(rbins) - 1
    )
    structures = list(model.structures)
    if "parameter_map" in mm_cfg:
        mm_cfg["parameter_map"] = [
            [p for p in pm if f".{structures[struct_num].name}." in p]
            for pm in mm_cfg["parameter_map"]
        ]
    structures[struct_num] = structure

    return Model(
        name=model.name,
        structures=tuple(structures),
        xyz=model.xyz,
        dz=model.dz,
        n_rounds=n_rounds,
        cur_round=0,
        to_run=model.to_run,
    )
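The `condlist` built above assigns every radius to exactly one radial bin, iterating from the outermost bin inward. A pure-Python sketch of the same masking with hypothetical radii and bin edges:

```python
rs = [0.5, 1.5, 2.5, 3.5]     # hypothetical 1D radii
rbins = [0.0, 1.0, 3.0, 4.0]  # bin edges; the last edge is appended at max radius

# One boolean mask per bin, outermost bin first, as in para_to_non_para
condlist = [
    [(rbins[i] <= r) and (r < rbins[i + 1]) for r in rs]
    for i in range(len(rbins) - 2, -1, -1)
]

# Half-open intervals [lo, hi) mean each radius lands in exactly one bin
for j in range(len(rs)):
    assert sum(cond[j] for cond in condlist) == 1
```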

print_once(*args)

Helper function to print only once when running with MPI. Only the rank 0 process will print.

Parameters:

Name Type Description Default
*args Unpack[tuple[Any, ...]]

Arguments to pass to print.

()
Source code in witch/fitter.py
def print_once(*args: Unpack[tuple[Any, ...]]):
    """
    Helper function to print only once when running with MPI.
    Only the rank 0 process will print.

    Parameters
    ----------
    *args : Unpack[tuple[Any, ...]]
        Arguments to pass to print.
    """
    if comm.Get_rank() == 0:
        print(*args)
        sys.stdout.flush()

run_lmfit(metamodel, maxiter=10, chitol=1e-05)

Fit a set of models to datasets jointly. This uses a modified Levenberg–Marquardt fitter with flat priors. This function is MPI aware.

Parameters:

- `metamodel` (MetaModel, required): The MetaModel that describes the models and datasets.
- `maxiter` (int, default `10`): The maximum number of iterations to fit.
- `chitol` (float, default `1e-5`): The delta chisq to use as the convergence criterion.

Returns:

- `metamodel` (MetaModel): The metamodel with the final set of fit parameters, errors, and chisq.
- `final_iter` (int): The number of iterations the fitter ran for.
- `delta_chisq` (float): The final delta chisq.

Source code in witch/fitting.py
def run_lmfit(
    metamodel: MetaModel,
    maxiter: int = 10,
    chitol: float = 1e-5,
) -> tuple[MetaModel, int, jax.Array]:
    """
    Fit a set of models to datasets jointly.
    This uses a modified Levenberg–Marquardt fitter with flat priors.
    This function is MPI aware.

    Parameters
    ----------
    metamodel : MetaModel
        The MetaModel that describes the models and datasets.
    maxiter : int, default: 10
        The maximum number of iterations to fit.
    chitol : float, default: 1e-5
        The delta chisq to use as the convergence criterion.

    Returns
    -------
    metamodel : MetaModel
        The metamodel with the final set of fit parameters, errors, and chisq.
    final_iter : int
        The number of iterations the fitter ran for.
    delta_chisq : float
        The final delta chisq.
    """
    zero = jnp.array(0.0, jnp.float32)
    chitol = jnp.float32(chitol)
    tf = np.where(np.array(metamodel.to_fit))[0]

    @jax.jit
    def _cond_func(val):
        i, delta_chisq, lmd, *_ = val
        iterbool = jax.lax.lt(i, maxiter)
        chisqbool = jax.lax.ge(delta_chisq, chitol) + jax.lax.gt(lmd, zero)
        return iterbool * chisqbool

    @jax.jit
    def _body_func(val):
        i, delta_chisq, lmd, metamodel, curve, grad = val
        curve_use = curve.at[:].add(lmd * jnp.diag(jnp.diag(curve)))
        # Get the step
        step = jnp.dot(
            invscale(curve_use.at[tf, :].get().at[:, tf].get()), grad.at[tf].get()
        )
        new_pars, to_fit = _prior_pars_fit(
            metamodel.priors,
            metamodel.parameters.at[tf].add(step),
            jnp.array(metamodel.to_fit),
        )
        # Get errs
        errs = jnp.where(
            to_fit, jnp.sqrt(jnp.diag(invscale(curve_use, do_invsafe=True))), 0
        )
        # Now lets get an updated model
        new_metamodel = copy(metamodel).update(new_pars, errs, metamodel.chisq)
        new_chisq, new_grad, new_curve = joint_objective(
            new_metamodel, True, True, True
        )
        new_metamodel = copy(new_metamodel).update(new_pars, errs, new_chisq)

        new_delta_chisq = jnp.astype(metamodel.chisq - new_metamodel.chisq, jnp.float32)
        metamodel, grad, curve, delta_chisq, lmd = jax.lax.cond(
            new_delta_chisq > 0,
            _success,
            _failure,
            metamodel,
            new_metamodel,
            grad,
            new_grad,
            curve,
            new_curve,
            delta_chisq,
            new_delta_chisq,
            lmd,
        )

        return (i + 1, delta_chisq, lmd, metamodel, curve, grad)

    pars, _ = _prior_pars_fit(
        metamodel.priors, metamodel.parameters, jnp.array(metamodel.to_fit)
    )
    metamodel = metamodel.update(pars, metamodel.errors, metamodel.chisq)
    _, grad, curve = joint_objective(metamodel, True, True, True)
    i, delta_chisq, _, metamodel, *_ = jax.lax.while_loop(
        _cond_func,
        _body_func,
        (0, jnp.astype(jnp.inf, jnp.float32), zero.copy(), metamodel, curve, grad),
    )

    return metamodel, i, delta_chisq
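At the heart of the loop is the damped step: the curvature's diagonal is inflated by `lmd` before inversion, interpolating between a full Newton step (`lmd = 0`) and a short, gradient-descent-like step (large `lmd`). A one-parameter sketch with hypothetical numbers:

```python
def lm_step(grad: float, curve: float, lmd: float) -> float:
    # curve_use = curve + lmd * diag(diag(curve)); in the scalar case the
    # diagonal is the curvature itself
    curve_use = curve + lmd * curve
    return grad / curve_use


grad = 4.0   # d(chi^2)/dp at the current parameters (hypothetical)
curve = 2.0  # d^2(chi^2)/dp^2 (hypothetical)

newton = lm_step(grad, curve, lmd=0.0)  # full Newton step: 2.0
damped = lm_step(grad, curve, lmd=3.0)  # damping shortens the step: 0.5
print(newton, damped)
```

On a successful iteration the fitter can shrink `lmd` toward the Newton limit; on a failed one it grows `lmd` to take more conservative steps.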