Objective. Decoding visual attention from brain signals during naturalistic video viewing has emerged as a new direction in brain-computer interface research. Current methods assume that stronger coupling between object motion and neural activity indicates higher attention, but this coupling can be confounded by eye movement artifacts and stimulus properties. This study investigates how visual eccentricity (the distance between a visual object and the fixation point) affects neural responses when eye movement artifacts are controlled.

Approach. EEG signals were recorded across three tasks that manipulated object eccentricity and attention conditions while participants maintained gaze fixation. Correlation analysis and match-mismatch decoding were performed to quantify the neural tracking of object motion.
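The correlation analysis is not spelled out above; a minimal sketch of one common formulation, in which neural tracking is scored as the per-channel Pearson correlation between the recorded EEG and EEG predicted from the stimulus features (the helper name and its averaging step are assumptions, not the study's exact method):

```python
import numpy as np

def tracking_correlation(eeg_actual, eeg_predicted):
    """Pearson correlation per EEG channel between recorded and predicted
    signals; the channel mean serves as a neural-tracking score.
    Both inputs: (n_samples, n_channels). Hypothetical helper."""
    a = eeg_actual - eeg_actual.mean(axis=0)
    p = eeg_predicted - eeg_predicted.mean(axis=0)
    num = (a * p).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (p ** 2).sum(axis=0))
    r = num / den
    return r, r.mean()

# Sanity check: a positive linear transform of the EEG correlates perfectly
t = np.linspace(0.0, 1.0, 200)
eeg = np.column_stack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 5 * t)])
r, score = tracking_correlation(eeg, 2.0 * eeg + 1.0)
```

Because Pearson correlation is invariant to positive scaling and offsets, the sanity check yields r = 1 in every channel; real predicted EEG would of course correlate far less strongly.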
import numpy as np
from sklearn.linear_model import Ridge

class EEGVisualAttentionDecoder:
    def __init__(self, n_eeg_channels, n_video_features, sfreq=100):
        self.n_eeg_channels = n_eeg_channels
        self.n_video_features = n_video_features
        self.sfreq = sfreq  # sampling rate (Hz) of EEG and feature series
        self.time_lags = np.arange(0, 500, 10)  # stimulus-to-EEG lags in ms

    def _create_lagged_matrix(self, features, time_lags):
        # One time-shifted copy of features (n_samples, n_features) per lag;
        # samples shifted past the start are zero-padded.
        n = features.shape[0]
        cols = []
        for lag_ms in time_lags:
            shift = int(lag_ms * self.sfreq / 1000)
            shifted = np.zeros_like(features)
            shifted[shift:] = features[:n - shift]
            cols.append(shifted)
        return np.hstack(cols)

    def fit_with_eccentricity_correction(self, eeg_data, video_features, eccentricity_map):
        # Control for eccentricity as a confound: include its lagged
        # regressors alongside the motion features in one design matrix.
        X_video = self._create_lagged_matrix(video_features, self.time_lags)
        X_ecc = self._create_lagged_matrix(eccentricity_map, self.time_lags)
        X_combined = np.hstack([X_video, X_ecc])
        model = Ridge(alpha=1.0)  # ridge regression with confound control
        model.fit(X_combined, eeg_data)
        return model
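Match-mismatch decoding, mentioned in the Approach, can be sketched as a two-alternative test: given an EEG segment, decide whether it tracks the temporally matched stimulus segment more strongly than a mismatched one. The version below scores each candidate by its correlation with the EEG at a fixed lag; the function name, segment length, and lag are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def match_mismatch_accuracy(eeg, feature, seg_len=100, lag=10, seed=None):
    """For each segment, compare the EEG's correlation with the matched
    feature segment vs. a randomly drawn mismatched segment; return the
    fraction of segments where the match wins. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    n_seg = (len(eeg) - lag) // seg_len
    correct = 0
    for i in range(n_seg):
        s = i * seg_len
        e_seg = eeg[s + lag: s + lag + seg_len]
        f_match = feature[s: s + seg_len]
        j = (i + rng.integers(1, n_seg)) % n_seg  # any other segment
        f_mismatch = feature[j * seg_len: j * seg_len + seg_len]
        r_match = np.corrcoef(e_seg, f_match)[0, 1]
        r_mismatch = np.corrcoef(e_seg, f_mismatch)[0, 1]
        correct += r_match > r_mismatch
    return correct / n_seg

# Synthetic check: EEG that is a lagged, noisy copy of the feature
rng = np.random.default_rng(0)
feat = rng.standard_normal(2000)
eeg = np.roll(feat, 10) + 0.1 * rng.standard_normal(2000)
acc = match_mismatch_accuracy(eeg, feat, seg_len=100, lag=10, seed=0)
```

With strong simulated tracking the accuracy approaches 1.0; chance level is 0.5, so above-chance accuracy on real EEG is the evidence of neural tracking this paradigm provides.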