The following table lists all VMware vCenter Server events. The Key column gives the event type identifier, and the Message column gives the corresponding message template: tokens in braces (for example, {vm.name} or {1}) are replaced with event-specific values when the event is logged, and tokens of the form {x.@enum.Y} are rendered from the named enumeration. Two short usage sketches follow the table.
Key | Message |
---|---|
AccountCreatedEvent | Account {spec.id} was created on host {host.name} |
AccountRemovedEvent | Account {account} was removed on host {host.name} |
AccountUpdatedEvent | Account {spec.id} was updated on host {host.name} |
AdminPasswordNotChangedEvent | The default password for the root user on the host {host.name} has not been changed |
AlarmAcknowledgedEvent | Acknowledged alarm '{alarm.name}' on {entity.name} |
AlarmActionTriggeredEvent | Alarm '{alarm.name}' on {entity.name} triggered an action |
AlarmClearedEvent | Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status} |
AlarmCreatedEvent | Created alarm '{alarm.name}' on {entity.name} |
AlarmEmailCompletedEvent | Alarm '{alarm.name}' on {entity.name} sent email to {to} |
AlarmEmailFailedEvent | Alarm '{alarm.name}' on {entity.name} cannot send email to {to} |
AlarmReconfiguredEvent | Reconfigured alarm '{alarm.name}' on {entity.name} |
AlarmRemovedEvent | Removed alarm '{alarm.name}' on {entity.name} |
AlarmScriptCompleteEvent | Alarm '{alarm.name}' on {entity.name} ran script {script} |
AlarmScriptFailedEvent | Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg} |
AlarmSnmpCompletedEvent | Alarm '{alarm.name}': an SNMP trap for entity {entity.name} was sent |
AlarmSnmpFailedEvent | Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg} |
AlarmStatusChangedEvent | Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status} |
AllVirtualMachinesLicensedEvent | All running virtual machines are licensed |
AlreadyAuthenticatedSessionEvent | User cannot log on because the user is already logged on |
BadUsernameSessionEvent | Cannot log in {userName}@{ipAddress} |
CanceledHostOperationEvent | The operation performed on host {host.name} in {datacenter.name} was canceled |
ChangeOwnerOfFileEvent | Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}. |
ChangeOwnerOfFileFailedEvent | Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}. |
ClusterComplianceCheckedEvent | Checked cluster for compliance |
ClusterCreatedEvent | Created cluster {computeResource.name} in {datacenter.name} |
ClusterDestroyedEvent | Removed cluster {computeResource.name} in datacenter {datacenter.name} |
ClusterOvercommittedEvent | Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name} |
ClusterReconfiguredEvent | Reconfigured cluster {computeResource.name} in datacenter {datacenter.name} |
ClusterStatusChangedEvent | Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} |
CustomFieldDefAddedEvent | Created new custom field definition {name} |
CustomFieldDefRemovedEvent | Removed field definition {name} |
CustomFieldDefRenamedEvent | Renamed field definition from {name} to {newName} |
CustomFieldValueChangedEvent | Changed custom field {name} on {entity.name} in {datacenter.name} to {value} |
CustomizationFailed | Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details. |
CustomizationLinuxIdentityFailed | An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details. |
CustomizationNetworkSetupFailed | An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details. |
CustomizationStartedEvent | Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS. |
CustomizationSucceeded | Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS. |
CustomizationSysprepFailed | The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information. |
CustomizationUnknownFailure | An error occurred while customizing VM {vm.name}. For details, reference the log file {logLocation} in the guest OS. |
DasAdmissionControlDisabledEvent | vSphere HA admission control disabled for cluster {computeResource.name} in {datacenter.name} |
DasAdmissionControlEnabledEvent | vSphere HA admission control enabled for cluster {computeResource.name} in {datacenter.name} |
DasAgentFoundEvent | Re-established contact with a primary host in this vSphere HA cluster |
DasAgentUnavailableEvent | Unable to contact a primary vSphere HA agent in cluster {computeResource.name} in {datacenter.name} |
DasClusterIsolatedEvent | All hosts in the vSphere HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network. |
DasDisabledEvent | vSphere HA disabled for cluster {computeResource.name} in {datacenter.name} |
DasEnabledEvent | vSphere HA enabled for cluster {computeResource.name} in {datacenter.name} |
DasHostFailedEvent | A possible host failure has been detected by vSphere HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name} |
DasHostIsolatedEvent | Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name} |
DatacenterCreatedEvent | Created datacenter {datacenter.name} in folder {parent.name} |
DatacenterRenamedEvent | Renamed datacenter from {oldName} to {newName} |
DatastoreCapacityIncreasedEvent | Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name} |
DatastoreDestroyedEvent | Removed unconfigured datastore {datastore.name} |
DatastoreDiscoveredEvent | Discovered datastore {datastore.name} on {host.name} in {datacenter.name} |
DatastoreDuplicatedEvent | Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name} |
DatastoreFileCopiedEvent | File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile} |
DatastoreFileDeletedEvent | File or directory {targetFile} deleted from {datastore.name} |
DatastoreFileMovedEvent | File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile} |
DatastoreIORMReconfiguredEvent | Reconfigured Storage I/O Control on datastore {datastore.name} |
DatastorePrincipalConfigured | Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name} |
DatastoreRemovedOnHostEvent | Removed datastore {datastore.name} from {host.name} in {datacenter.name} |
DatastoreRenamedEvent | Renamed datastore from {oldName} to {newName} in {datacenter.name} |
DatastoreRenamedOnHostEvent | Renamed datastore from {oldName} to {newName} in {datacenter.name} |
DrsDisabledEvent | Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name} |
DrsEnabledEvent | Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name} |
DrsEnteredStandbyModeEvent | DRS put {host.name} into standby mode |
DrsEnteringStandbyModeEvent | DRS is putting {host.name} into standby mode |
DrsExitedStandbyModeEvent | DRS moved {host.name} out of standby mode |
DrsExitingStandbyModeEvent | DRS is moving {host.name} out of standby mode |
DrsExitStandbyModeFailedEvent | DRS cannot move {host.name} out of standby mode |
DrsInvocationFailedEvent | DRS invocation not completed |
DrsRecoveredFromFailureEvent | DRS has recovered from the failure |
DrsResourceConfigureFailedEvent | Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS. |
DrsResourceConfigureSyncedEvent | Resource configuration specification returned to synchronization after a previous failure on host '{host.name}' in {datacenter.name} |
DrsRuleComplianceEvent | {vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules |
DrsRuleViolationEvent | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule |
DrsSoftRuleViolationEvent | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host soft affinity rule |
DrsVmMigratedEvent | DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name} |
DrsVmPoweredOnEvent | DRS powered on {vm.name} on {host.name} in {datacenter.name} |
DuplicateIpDetectedEvent | Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP} |
DvpgImportEvent | Import operation with type {importType} was performed on {net.name} |
DvpgRestoreEvent | Restore operation was performed on {net.name} |
DVPortgroupCreatedEvent | dvPort group {net.name} in {datacenter.name} was added to switch {dvs.name}. |
DVPortgroupDestroyedEvent | dvPort group {net.name} in {datacenter.name} was deleted. |
DVPortgroupEvent | |
DVPortgroupReconfiguredEvent | dvPort group {net.name} in {datacenter.name} was reconfigured. |
DVPortgroupRenamedEvent | dvPort group {oldName} in {datacenter.name} was renamed to {newName} |
DvsCreatedEvent | A vSphere Distributed Switch {dvs.name} was created in {datacenter.name}. |
DvsDestroyedEvent | vSphere Distributed Switch {dvs.name} in {datacenter.name} was deleted. |
DvsEvent | vSphere Distributed Switch event |
DvsHealthStatusChangeEvent | Health check status was changed in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} |
DvsHostBackInSyncEvent | The vSphere Distributed Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server. |
DvsHostJoinedEvent | The host {hostJoined.name} joined the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsHostLeftEvent | The host {hostLeft.name} left the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsHostStatusUpdated | The host {hostMember.name} changed status on the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsHostWentOutOfSyncEvent | The vSphere Distributed Switch {dvs.name} configuration on the host differed from that of the vCenter Server. |
DvsImportEvent | Import operation with type {importType} was performed on {dvs.name} |
DvsMergedEvent | vSphere Distributed Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}. |
DvsPortBlockedEvent | The dvPort {portKey} was blocked in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortConnectedEvent | The dvPort {portKey} was connected in the vSphere Distributed Switch {dvs.name} in {datacenter.name} |
DvsPortCreatedEvent | New ports were created in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortDeletedEvent | Deleted ports in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortDisconnectedEvent | The dvPort {portKey} was disconnected in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortEnteredPassthruEvent | The dvPort {portKey} was in passthrough mode in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortExitedPassthruEvent | The dvPort {portKey} was not in passthrough mode in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortJoinPortgroupEvent | The dvPort {portKey} was moved into the dvPort group {portgroupName} in {datacenter.name}. |
DvsPortLeavePortgroupEvent | The dvPort {portKey} was moved out of the dvPort group {portgroupName} in {datacenter.name}. |
DvsPortLinkDownEvent | The dvPort {portKey} link was down in the vSphere Distributed Switch {dvs.name} in {datacenter.name} |
DvsPortLinkUpEvent | The dvPort {portKey} link was up in the vSphere Distributed Switch {dvs.name} in {datacenter.name} |
DvsPortReconfiguredEvent | Reconfigured ports in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortRuntimeChangeEvent | The dvPort {portKey} runtime information changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortUnblockedEvent | The dvPort {portKey} was unblocked in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsPortVendorSpecificStateChangeEvent | The dvPort {portKey} vendor specific state changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. |
DvsReconfiguredEvent | The vSphere Distributed Switch {dvs.name} in {datacenter.name} was reconfigured. |
DvsRenamedEvent | The vSphere Distributed Switch {oldName} in {datacenter.name} was renamed to {newName}. |
DvsRestoreEvent | Restore operation was performed on {dvs.name} |
DvsUpgradeAvailableEvent | An upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} is available. |
DvsUpgradedEvent | vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} was upgraded. |
DvsUpgradeInProgressEvent | An upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} is in progress. |
DvsUpgradeRejectedEvent | Cannot complete an upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} |
EnteredMaintenanceModeEvent | Host {host.name} in {datacenter.name} has entered maintenance mode |
EnteredStandbyModeEvent | The host {host.name} is in standby mode |
EnteringMaintenanceModeEvent | Host {host.name} in {datacenter.name} has started to enter maintenance mode |
EnteringStandbyModeEvent | The host {host.name} is entering standby mode |
ErrorUpgradeEvent | {message} |
com.vmware.license.AddLicenseEvent | License {licenseKey} added to VirtualCenter |
com.vmware.license.AssignLicenseEvent | License {licenseKey} assigned to asset {entityName} with id {entityId} |
com.vmware.license.DLFDownloadFailedEvent | Failed to download license information from the host {hostname} due to {errorReason} |
com.vmware.license.LicenseAssignFailedEvent | License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}. |
com.vmware.license.LicenseCapacityExceededEvent | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText}) |
com.vmware.license.LicenseExpiryEvent | Your host license expires in {remainingDays} days. The host will disconnect from vCenter Server when its license expires. |
com.vmware.license.LicenseUserThresholdExceededEvent | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the user-defined threshold ({threshold} {costUnitText}) |
com.vmware.license.RemoveLicenseEvent | License {licenseKey} removed from VirtualCenter |
com.vmware.license.UnassignLicenseEvent | License unassigned from asset {entityName} with id {entityId} |
com.vmware.pbm.profile.associate | Associated storage policy: {ProfileId} with entity: {EntityId} |
com.vmware.pbm.profile.delete | Deleted storage policy: {ProfileId} |
com.vmware.pbm.profile.dissociate | Dissociated storage policy: {ProfileId} from entity: {EntityId} |
com.vmware.pbm.profile.updateName | Storage policy name updated for {ProfileId}. New name: {NewProfileName} |
com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent | vSphere HA initiated a failover action on {pendingVms} virtual machines in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.HA.ClusterFailoverInProgressEvent | vSphere HA failover operation in progress in cluster {computeResource.name} in datacenter {datacenter.name}: {numBeingPlaced} VMs being restarted, {numToBePlaced} VMs waiting for a retry, {numAwaitingResource} VMs waiting for resources, {numAwaitingVsanVmChange} inaccessible Virtual SAN VMs |
com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.HeartbeatDatastoreChanged | Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.HeartbeatDatastoreNotSufficient | The number of vSphere HA heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum} |
com.vmware.vc.HA.HostAgentErrorEvent | vSphere HA agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason} |
com.vmware.vc.HA.HostDasErrorEvent | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} |
com.vmware.vc.HA.HostStateChangedEvent | The vSphere HA availability state of the host {host.name} in cluster {computeResource.name} in {datacenter.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState} |
com.vmware.vc.HA.HostUnconfiguredWithProtectedVms | Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected from vCenter Server, but contains {protectedVmCount} protected virtual machine(s) |
com.vmware.vc.HA.InvalidMaster | vSphere HA agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised. |
com.vmware.vc.HA.NotAllHostAddrsPingable | The vSphere HA agent on the host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs} |
com.vmware.vc.HA.StartFTSecondaryFailedEvent | vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason : {fault.msg}. vSphere HA agent will retry until it times out. |
com.vmware.vc.HA.StartFTSecondarySucceededEvent | vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}. |
com.vmware.vc.HA.UserHeartbeatDatastoreRemoved | vSphere HA removed datastore {dsName} from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory |
com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent | vCenter Server cannot communicate with the master vSphere HA agent on {hostname} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.VcConnectedToMasterEvent | vCenter Server is connected to a master HA agent running on host {hostname} in {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.VcDisconnectedFromMasterEvent | vCenter Server is disconnected from a master HA agent running on host {hostname} in {computeResource.name} in {datacenter.name} |
com.vmware.vc.VCHealthStateChangedEvent | vCenter Service overall health changed from '{oldState}' to '{newState}' |
com.vmware.vc.VmCloneFailedInvalidDestinationEvent | Cannot clone {vm.name} as {destVmName} to invalid or non-existent destination with ID {invalidMoRef}: {fault} |
com.vmware.vc.VmCloneToResourcePoolFailedEvent | Cannot clone {vm.name} as {destVmName} to resource pool {destResourcePool}: {fault} |
com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent | CA Certificates were updated on {hostname} |
com.vmware.vc.certmgr.HostCertExpirationImminentEvent | Host Certificate expiration is imminent on {hostname}. Expiration Date: {expiryDate} |
com.vmware.vc.certmgr.HostCertExpiringEvent | Host Certificate on {hostname} is nearing expiration. Expiration Date: {expiryDate} |
com.vmware.vc.certmgr.HostCertExpiringShortlyEvent | Host Certificate on {hostname} will expire soon. Expiration Date: {expiryDate} |
com.vmware.vc.certmgr.HostCertRevokedEvent | Host Certificate on {hostname} is revoked. |
com.vmware.vc.certmgr.HostCertUpdatedEvent | Host Certificate was updated on {hostname}, new thumbprint: {thumbprint} |
com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent | Management Agents were restarted on {hostname} |
com.vmware.vc.cim.CIMGroupHealthStateChanged | Health of {data.group} changed from {data.oldState} to {data.newState}. {data.cause} |
com.vmware.vc.datastore.UpdateVmFilesFailedEvent | Failed to update VM files on datastore {ds.name} using host {hostName} |
com.vmware.vc.datastore.UpdatedVmFilesEvent | Updated VM files on datastore {ds.name} using host {hostName} |
com.vmware.vc.datastore.UpdatingVmFilesEvent | Updating VM files on datastore {ds.name} using host {hostName} |
com.vmware.vc.guestOperations.GuestOperation | Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}. |
com.vmware.vc.guestOperations.GuestOperationAuthFailure | Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}. |
com.vmware.vc.host.clear.vFlashResource.reachthreshold | Host's virtual flash resource usage dropped below {1}%. |
com.vmware.vc.host.problem.vFlashResource.reachthreshold | Host's virtual flash resource usage is more than {1}%. |
com.vmware.vc.host.vFlash.defaultModuleChangedEvent | Any new virtual Flash Read Cache configuration request will use {vFlashModule} as default virtual flash module. All existing virtual Flash Read Cache configurations remain unchanged. |
com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent | vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} registration has failed. Reason : {fault.msg}. |
com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent | Failed to unregister vSphere APIs for I/O Filters (VAIO) vendor provider {host.name}. Reason : {fault.msg}. |
com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} |
com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} |
com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent | Failed to clone state for the entity '{entityName}' on extension {extensionName} |
com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent | Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName} |
com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent | Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description} |
com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent | Failed to register entity '{entityName}' on extension {extensionName} |
com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent | Failed to unregister entities on extension {extensionName} |
com.vmware.vc.ovfconsumers.ValidateOstErrorEvent | Failed to validate OVF descriptor on extension {extensionName} |
com.vmware.vc.rp.ResourcePoolRenamedEvent | Resource pool '{oldName}' has been renamed to '{newName}' |
com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent | Datastore cluster {objectName} has one or more datastores {datastore} shared across multiple datacenters |
com.vmware.vc.sdrs.StorageDrsEnabledEvent | Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior} |
com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent | Datastore cluster {objectName} is connected to one or more hosts {host} that do not support storage DRS |
com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent | Storage DRS migrated disks of VM {vm.name} to datastore {ds.name} |
com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent | Storage DRS placed disks of VM {vm.name} on datastore {ds.name} |
com.vmware.vc.sdrs.StoragePodCreatedEvent | Created datastore cluster {objectName} |
com.vmware.vc.sdrs.StoragePodDestroyedEvent | Removed datastore cluster {objectName} |
com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent | SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration. |
com.vmware.vc.sms.LunCapabilityInitEvent | Storage provider [{providerName}] : system capability warning for {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.LunCapabilityMetEvent | Storage provider [{providerName}] : system capability normal for {eventSubjectId} |
com.vmware.vc.sms.LunCapabilityNotMetEvent | Storage provider [{providerName}] : system capability alert for {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.ObjectTypeAlarmClearedEvent | Storage provider [{providerName}] cleared a Storage Alarm of type 'Object' on {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.ObjectTypeAlarmErrorEvent | Storage provider [{providerName}] raised an alert type 'Object' on {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.ObjectTypeAlarmWarningEvent | Storage provider [{providerName}] raised a warning of type 'Object' on {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent | Storage provider [{providerName}] : thin provisioning capacity threshold normal for {eventSubjectId} |
com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent | Storage provider [{providerName}] : thin provisioning capacity threshold alert for {eventSubjectId} |
com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent | Storage provider [{providerName}] : thin provisioning capacity threshold warning for {eventSubjectId} |
com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent | Certificate for storage provider {providerName} will expire very shortly. Expiration date : {expiryDate} |
com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent | Certificate for storage provider {providerName} will expire soon. Expiration date : {expiryDate} |
com.vmware.vc.sms.VasaProviderCertificateValidEvent | Certificate for storage provider {providerName} is valid |
com.vmware.vc.sms.VasaProviderConnectedEvent | Storage provider {providerName} is connected |
com.vmware.vc.sms.VasaProviderDisconnectedEvent | Storage provider {providerName} is disconnected |
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure | Refreshing CA certificates and CRLs failed for VASA providers with url : {providerUrls} |
com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent | Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}. |
com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. |
com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. |
com.vmware.vc.sms.provider.health.event | Storage provider [{providerName}] : health event for {eventSubjectId} : {msgTxt} |
com.vmware.vc.sms.provider.system.event | Storage provider [{providerName}] : system event : {msgTxt} |
com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}. |
com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. |
com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. |
com.vmware.vc.spbm.ProfileAssociationFailedEvent | Profile association/dissociation failed for {entityName} |
com.vmware.vc.spbm.ServiceErrorEvent | Configuring storage policy failed for VM {entityName}. Verify that SPBM service is healthy. Fault Reason : {errorMessage} |
com.vmware.vc.vcp.TestEndEvent | VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.vcp.TestStartEvent | VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.vcp.VmDatastoreFailedEvent | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore} |
com.vmware.vc.vcp.VmNetworkFailedEvent | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network} |
com.vmware.vc.vcp.VmPowerOffHangEvent | HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying |
com.vmware.vc.vcp.VmWaitForCandidateHostEvent | HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying |
com.vmware.vc.vflash.SsdConfigurationFailedEvent | Configuration on disk {disk.path} failed. Reason : {fault.msg} |
com.vmware.vc.vm.DstVmMigratedEvent | Virtual machine {vm.name} {newMoRef} in {computeResource.name} in {datacenter.name} was migrated from {oldMoRef} |
com.vmware.vc.vm.SrcVmMigratedEvent | Virtual machine {vm.name} {oldMoRef} in {computeResource.name} in {datacenter.name} was migrated to {newMoRef} |
com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent | Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is not satisfied |
com.vmware.vc.vm.VmAdapterResvSatisfiedEvent | Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is satisfied |
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot | Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId} |
com.vmware.vc.vm.VmStateRevertedToSnapshot | The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId} |
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent | vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.vmam.VmAppHealthStateChangedEvent | vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent | Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum |
com.vmware.vc.vsan.DatastoreNoCapacityEvent | Virtual SAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity |
com.vmware.vc.vsan.HostCommunicationErrorEvent | Host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster |
com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent | Virtual SAN vendor provider {host.name} deregistration has failed. Reason : {fault.msg}. |
com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent | Virtual SAN vendor provider {host.name} registration has failed. Reason : {fault.msg}. |
com.vmware.vc.vsan.RogueHostFoundEvent | Found host(s) {hostString} participating in the Virtual SAN service in cluster {computeResource.name} in datacenter {datacenter.name} that are not members of this host's vCenter cluster |
com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent | Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg} |
com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent | Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg} |
com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent | Virtual SAN cluster {computeResource.name} has one or more hosts that need a disk format upgrade: {host}. For more information about the Virtual SAN upgrade, see the 'Virtual SAN upgrade procedure' section in the documentation |
com.vmware.vim.vsm.dependency.bind.vApp | vService dependency '{dependencyName}' on vApp '{targetName}' bound to provider '{providerName}' |
com.vmware.vim.vsm.dependency.bind.vm | vService dependency '{dependencyName}' on '{vm.name}' bound to provider '{providerName}' |
com.vmware.vim.vsm.dependency.create.vApp | Created vService dependency '{dependencyName}' with type '{dependencyType}' on vApp '{targetName}' |
com.vmware.vim.vsm.dependency.create.vm | Created vService dependency '{dependencyName}' with type '{dependencyType}' on '{vm.name}' |
com.vmware.vim.vsm.dependency.destroy.vApp | Destroyed vService dependency '{dependencyName}' on vApp '{targetName}' |
com.vmware.vim.vsm.dependency.destroy.vm | Destroyed vService dependency '{dependencyName}' on '{vm.name}' |
com.vmware.vim.vsm.dependency.reconfigure.vApp | Reconfigured vService dependency '{dependencyName}' on vApp '{targetName}' |
com.vmware.vim.vsm.dependency.reconfigure.vm | Reconfigured vService dependency '{dependencyName}' on '{vm.name}' |
com.vmware.vim.vsm.dependency.unbind.vApp | vService dependency '{dependencyName}' on vApp '{targetName}' unbound from provider '{providerName}' |
com.vmware.vim.vsm.dependency.unbind.vm | vService dependency '{dependencyName}' on '{vm.name}' unbound from provider '{providerName}' |
com.vmware.vim.vsm.dependency.update.vApp | Updated vService dependency '{dependencyName}' on vApp '{targetName}' |
com.vmware.vim.vsm.dependency.update.vm | Updated vService dependency '{dependencyName}' on '{vm.name}' |
com.vmware.vim.vsm.provider.register | vService provider '{providerName}' with type '{providerType}' registered for extension '{extensionKey}' |
com.vmware.vim.vsm.provider.unregister | vService provider '{providerName}' with type '{providerType}' unregistered for extension '{extensionKey}' |
com.vmware.vim.vsm.provider.update | Updating vService provider '{providerName}' registered for extension '{extensionKey}' |
esx.audit.account.locked | Remote access for ESXi local user account '{1}' has been locked for {2} seconds after {3} failed login attempts. |
esx.audit.account.loginfailures | Multiple remote login failures detected for ESXi local user account '{1}'. |
esx.audit.dcui.login.failed | Authentication of user {1} has failed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.login.passwd.changed | Login password for user {1} has been changed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.network.restart | A management interface {1} has been restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.esxcli.host.poweroff | The host is being powered off through esxcli. Reason for powering off: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information. |
esx.audit.esxcli.host.restart | The host is being rebooted through esxcli. Reason for reboot: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information. |
esx.audit.esximage.hostacceptance.changed | Host acceptance level changed from {1} to {2} |
esx.audit.esximage.install.securityalert | SECURITY ALERT: Installing image profile '{1}' with {2}. |
esx.audit.esximage.profile.install.successful | Successfully installed image profile '{1}'. Installed {2} VIB(s), removed {3} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction. |
esx.audit.esximage.profile.update.successful | Successfully updated host to image profile '{1}'. Installed {2} VIB(s), removed {3} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction. |
esx.audit.esximage.vib.install.successful | Successfully installed {1} VIB(s), removed {2} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction. |
esx.audit.esximage.vib.remove.successful | Successfully removed {1} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction. |
esx.audit.host.maxRegisteredVMsExceeded | The number of virtual machines registered on host {host.name} in cluster {computeResource.name} in {datacenter.name} exceeded limit: {current} registered, {limit} is the maximum supported. |
esx.audit.net.firewall.config.changed | Firewall configuration has changed. Operation '{1}' for rule set {2} succeeded. |
esx.audit.net.firewall.enabled | Firewall has been enabled for port {1}. |
esx.audit.net.firewall.port.hooked | Port {1} is now protected by Firewall. |
esx.audit.net.firewall.port.removed | Port {1} is no longer protected with Firewall. |
esx.audit.net.lacp.disable | LACP for VDS {1} is disabled. |
esx.audit.net.lacp.enable | LACP for VDS {1} is enabled. |
esx.audit.net.lacp.uplink.connected | LACP info: uplink {1} on VDS {2} got connected. |
esx.audit.uw.secpolicy.alldomains.level.changed | The enforcement level for all security domains has been changed to {1}. The enforcement level must always be set to enforcing. |
esx.audit.uw.secpolicy.domain.level.changed | The enforcement level for security domain {1} has been changed to {2}. The enforcement level must always be set to enforcing. |
esx.audit.vmfs.volume.mounted | File system {1} on volume {2} has been mounted in {3} mode on this host. |
esx.audit.vmfs.volume.umounted | The volume {1} has been safely unmounted. The datastore is no longer accessible on this host. |
esx.clear.net.connectivity.restored | Network connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. |
esx.clear.net.dvport.connectivity.restored | Network connectivity restored on DVPorts: {1}. Physical NIC {2} is up. |
esx.clear.net.dvport.redundancy.restored | Uplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up. |
esx.clear.net.lacp.lag.transition.up | LACP info: LAG {1} on VDS {2} is up. |
esx.clear.net.lacp.uplink.transition.up | LACP info: uplink {1} on VDS {2} is moved into link aggregation group. |
esx.clear.net.lacp.uplink.unblocked | LACP info: uplink {1} on VDS {2} is unblocked. |
esx.clear.net.redundancy.restored | Uplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. |
esx.clear.net.vmnic.linkstate.up | Physical NIC {1} linkstate is up. |
esx.clear.scsi.device.io.latency.improved | Device {1} performance has improved. I/O latency reduced from {2} microseconds to {3} microseconds. |
esx.clear.scsi.device.state.on | Device {1} has been turned on administratively. |
esx.clear.scsi.device.state.permanentloss.deviceonline | Device {1}, which was permanently inaccessible, is now online. No data consistency guarantees. |
esx.clear.storage.apd.exit | Device or filesystem with identifier {1} has exited the All Paths Down state. |
esx.clear.storage.connectivity.restored | Connectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again. |
esx.clear.storage.redundancy.restored | Path redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again. |
esx.problem.3rdParty.error | A 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}. |
esx.problem.3rdParty.info | A 3rd party component, {1}, running on ESXi has reported an informational event. If needed, please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}. |
esx.problem.3rdParty.warning | A 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}. |
esx.problem.apei.bert.memory.error.corrected | A corrected memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10} |
esx.problem.apei.bert.memory.error.fatal | A fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10} |
esx.problem.apei.bert.memory.error.recoverable | A recoverable memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10} |
esx.problem.apei.bert.pcie.error.corrected | A corrected PCIe error occurred in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. |
esx.problem.apei.bert.pcie.error.fatal | The platform encountered a fatal PCIe error in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. |
esx.problem.apei.bert.pcie.error.recoverable | A recoverable PCIe error occurred in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. |
esx.problem.application.core.dumped | An application ({1}) running on the ESXi host has crashed ({2} time(s) so far). A core file might have been created at {3}. |
esx.problem.coredump.capacity.insufficient | The storage capacity of the coredump targets is insufficient to capture a complete coredump. Recommended coredump capacity is {1} MiB. |
esx.problem.coredump.copyspace | The free space available in default coredump copy location is insufficient to copy new coredumps. Recommended free space is {1} MiB. |
esx.problem.coredump.extraction.failed.nospace | The given partition has insufficient free space to extract the coredump. At least {1} MiB is required. |
esx.problem.cpu.smp.ht.invalid | Disabling HyperThreading due to invalid configuration: Number of threads: {1}, Number of PCPUs: {2}. |
esx.problem.cpu.smp.ht.numpcpus.max | Found {1} PCPUs, but only using {2} of them due to specified limit. |
esx.problem.cpu.smp.ht.partner.missing | Disabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}. |
esx.problem.dhclient.lease.none | Unable to obtain a DHCP lease on interface {1}. |
esx.problem.dhclient.lease.offered.error | No expiry time on offered DHCP lease from {1}. |
esx.problem.esximage.install.error | Could not install image profile: {1} |
esx.problem.esximage.install.invalidhardware | Host doesn't meet image profile '{1}' hardware requirements: {2} |
esx.problem.esximage.install.stage.error | Could not stage image profile '{1}': {2} |
esx.problem.hardware.acpi.interrupt.routing.device.invalid | Skipping interrupt routing entry with bad device number: {1}. This is a BIOS bug. |
esx.problem.hardware.acpi.interrupt.routing.pin.invalid | Skipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug. |
esx.problem.hardware.ioapic.missing | IOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC. |
esx.problem.hostd.core.dumped | {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped. |
esx.problem.iorm.badversion | Host {1} cannot participate in Storage I/O Control (SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with the version number {4} of its counterparts on other hosts connected to this datastore. |
esx.problem.iorm.nonviworkload | An unmanaged I/O workload is detected on a SIOC-enabled datastore: {1}. |
esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown | The ESXi host's vMotion network server encountered an error while monitoring incoming network connections. Shutting down listener socket. vMotion might not be possible with this host until vMotion is manually re-enabled. Failure status: {1} |
esx.problem.net.connectivity.lost | Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}. |
esx.problem.net.dvport.connectivity.lost | Lost network connectivity on DVPorts: {1}. Physical NIC {2} is down. |
esx.problem.net.dvport.redundancy.degraded | Uplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down. |
esx.problem.net.dvport.redundancy.lost | Lost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down. |
esx.problem.net.e1000.tso6.notsupported | Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. |
esx.problem.net.fence.port.badfenceid | VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: invalid fenceId. |
esx.problem.net.fence.resource.limited | VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: the maximum number of fence networks or ports has been reached. |
esx.problem.net.fence.switch.unavailable | VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: the dvSwitch fence property is not set. |
esx.problem.net.firewall.config.failed | Firewall configuration operation '{1}' failed. The changes were not applied to rule set {2}. |
esx.problem.net.firewall.port.hookfailed | Adding port {1} to Firewall failed. |
esx.problem.net.gateway.set.failed | Cannot connect to the specified gateway {1}. Failed to set it. |
esx.problem.net.heap.belowthreshold | {1} free size dropped below {2} percent. |
esx.problem.net.lacp.lag.transition.down | LACP warning: LAG {1} on VDS {2} is down. |
esx.problem.net.lacp.peer.noresponse | LACP error: No peer response on uplink {1} for VDS {2}. |
esx.problem.net.lacp.policy.incompatible | LACP error: The current teaming policy on VDS {1} is incompatible; only IP hash is supported. |
esx.problem.net.lacp.policy.linkstatus | LACP error: The current teaming policy on VDS {1} is incompatible; only link-status failover detection is supported. |
esx.problem.net.lacp.uplink.blocked | LACP warning: uplink {1} on VDS {2} is blocked. |
esx.problem.net.lacp.uplink.disconnected | LACP warning: uplink {1} on VDS {2} got disconnected. |
esx.problem.net.lacp.uplink.fail.duplex | LACP error: Duplex mode across all uplink ports must be full; VDS {1} uplink {2} has a different mode. |
esx.problem.net.lacp.uplink.fail.speed | LACP error: Speed across all uplink ports must be the same; VDS {1} uplink {2} has a different speed. |
esx.problem.net.lacp.uplink.inactive | LACP error: All uplinks on VDS {1} must be active. |
esx.problem.net.lacp.uplink.transition.down | LACP warning: uplink {1} on VDS {2} is moved out of link aggregation group. |
esx.problem.net.migrate.bindtovmk | The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. |
esx.problem.net.migrate.unsupported.latency | ESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance. |
esx.problem.net.portset.port.full | Portset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports. |
esx.problem.net.portset.port.vlan.invalidid | {1} VLAN ID {2} is invalid. VLAN ID must be between 0 and 4095. |
esx.problem.net.proxyswitch.port.unavailable | Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch. |
esx.problem.net.redundancy.degraded | Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}. |
esx.problem.net.redundancy.lost | Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}. |
esx.problem.net.uplink.mtu.failed | VMkernel failed to set the MTU value {1} on the uplink {2}. |
esx.problem.net.vmknic.ip.duplicate | A duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}. |
esx.problem.net.vmnic.linkstate.down | Physical NIC {1} linkstate is down. |
esx.problem.net.vmnic.linkstate.flapping | Taking down physical NIC {1} because the link is unstable. |
esx.problem.net.vmnic.watchdog.reset | Uplink {1} has recovered from a transient failure due to watchdog timeout |
esx.problem.ntpd.clock.correction.error | NTP daemon stopped. Time correction {1} > {2} seconds. Manually set the time and restart ntpd. |
esx.problem.pageretire.platform.retire.request | Memory page retirement requested by platform firmware. FRU ID: {1}. Refer to System Hardware Log: {2} |
esx.problem.pageretire.selectedmpnthreshold.host.exceeded | Number of host physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}). |
esx.problem.scratch.partition.size.small | Size of scratch partition {1} is too small. Recommended scratch partition size is {2} MiB. |
esx.problem.scratch.partition.unconfigured | No scratch partition has been configured. Recommended scratch partition size is {1} MiB. |
esx.problem.scsi.device.close.failed | Failed to close the device {1} properly, plugin {2}. |
esx.problem.scsi.device.detach.failed | Detach failed for device: {1}. Exceeded the number of devices that can be detached; please clean up stale detach entries. |
esx.problem.scsi.device.filter.attach.failed | Failed to attach filters to device '%s' during registration. Plugin load failed or the filter rules are incorrect. |
esx.problem.scsi.device.io.bad.plugin.type | Bad plugin type for device {1}, plugin {2} |
esx.problem.scsi.device.io.inquiry.failed | Failed to get standard inquiry for device {1} from Plugin {2}. |
esx.problem.scsi.device.io.latency.high | Device {1} performance has deteriorated. I/O latency increased from average value of {2} microseconds to {3} microseconds. |
esx.problem.scsi.device.io.qerr.change.config | QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The system is not configured to change the QErr setting of the device. The QErr value supported by the system is 0x{3}. Please check the SCSI ChangeQErrSetting configuration value for ESX. |
esx.problem.scsi.device.io.qerr.changed | QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The device was originally configured to the supported QErr setting of 0x{3}, but this has been changed and could not be changed back. |
esx.problem.scsi.device.is.local.failed | Failed to verify if the device {1} from plugin {2} is a local (not shared) device |
esx.problem.scsi.device.is.pseudo.failed | Failed to verify if the device {1} from plugin {2} is a pseudo device |
esx.problem.scsi.device.is.ssd.failed | Failed to verify if the device {1} from plugin {2} is a Solid State Disk device |
esx.problem.scsi.device.limitreached | The maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created. |
esx.problem.scsi.device.state.off | Device {1}, has been turned off administratively. |
esx.problem.scsi.device.state.permanentloss | Device {1} has been removed or is permanently inaccessible. Affected datastores (if any): {2}. |
esx.problem.scsi.device.state.permanentloss.noopens | Permanently inaccessible device {1} has no more opens. It is now safe to unmount datastores (if any) {2} and delete the device. |
esx.problem.scsi.device.state.permanentloss.pluggedback | Device {1} has been plugged back in after being marked permanently inaccessible. No data consistency guarantees. |
esx.problem.scsi.device.state.permanentloss.withreservationheld | Device {1} has been removed or is permanently inaccessible, while holding a reservation. Affected datastores (if any): {2}. |
esx.problem.scsi.device.thinprov.atquota | Space utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}. |
esx.problem.scsi.scsipath.badpath.unreachpe | Sanity check failed for path {1}. The path is to a vVol PE, but it goes out of adapter {2} which is not PE capable. Path dropped. |
esx.problem.scsi.scsipath.badpath.unsafepe | Sanity check failed for path {1}. Could not safely determine if the path is to a vVol PE. Path dropped. |
esx.problem.scsi.scsipath.limitreached | The maximum number of supported paths of {1} has been reached. Path {2} could not be added. |
esx.problem.scsi.unsupported.plugin.type | SCSI device allocation is not supported for plugin type {1} |
esx.problem.storage.apd.start | Device or filesystem with identifier {1} has entered the All Paths Down state. |
esx.problem.storage.apd.timeout | Device or filesystem with identifier {1} has entered the All Paths Down Timeout state after being in the All Paths Down state for {2} seconds. I/Os will now be fast failed. |
esx.problem.storage.connectivity.devicepor | Frequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2} |
esx.problem.storage.connectivity.lost | Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. |
esx.problem.storage.connectivity.pathpor | Frequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} |
esx.problem.storage.connectivity.pathstatechanges | Frequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} |
esx.problem.storage.iscsi.discovery.connect.error | iSCSI discovery to {1} on {2} failed. The iSCSI Initiator could not establish a network connection to the discovery address. |
esx.problem.storage.iscsi.discovery.login.error | iSCSI discovery to {1} on {2} failed. The Discovery target returned a login error of: {3}. |
esx.problem.storage.iscsi.target.connect.error | Login to iSCSI target {1} on {2} failed. The iSCSI initiator could not establish a network connection to the target. |
esx.problem.storage.iscsi.target.login.error | Login to iSCSI target {1} on {2} failed. Target returned login error of: {3}. |
esx.problem.storage.iscsi.target.permanently.lost | The iSCSI target {2} was permanently removed from {1}. |
esx.problem.storage.redundancy.degraded | Path redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}. |
esx.problem.storage.redundancy.lost | Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. |
esx.problem.vfat.filesystem.full.other | The VFAT filesystem {1} (UUID {2}) is full. |
esx.problem.vfat.filesystem.full.scratch | The host's scratch partition, which is the VFAT filesystem {1} (UUID {2}), is full. |
esx.problem.visorfs.inodetable.full | The root filesystem's file table is full. As a result, the file {1} could not be created by the application '{2}'. |
esx.problem.visorfs.ramdisk.full | The ramdisk '{1}' is full. As a result, the file {2} could not be written. |
esx.problem.visorfs.ramdisk.inodetable.full | The file table of the ramdisk '{1}' is full. As a result, the file {2} could not be created by the application '{3}'. |
esx.problem.vm.kill.unexpected.fault.failure | The VM using the config file {1} could not fault in a guest physical page from the hypervisor level swap file at {2}. The VM is terminated as further progress is impossible. |
esx.problem.vm.kill.unexpected.forcefulPageRetire | The VM using the config file {1} contains the host physical page {2} which was scheduled for immediate retirement. To avoid system instability the VM is forcefully powered off. |
esx.problem.vm.kill.unexpected.noSwapResponse | The VM using the config file {1} did not respond to {2} swap actions in {3} seconds and is forcefully powered off to prevent system instability. |
esx.problem.vm.kill.unexpected.vmtrack | The VM using the config file {1} is allocating too many pages while the system is critically low on free memory. It is forcefully terminated to prevent system instability. |
esx.problem.vmfs.ats.incompatibility.detected | Multi-extent ATS-only volume '{1}' ({2}) is unable to use ATS because HardwareAcceleratedLocking is disabled on this host: potential for introducing filesystem corruption. Volume should not be used from other hosts. |
esx.problem.vmfs.ats.support.lost | ATS-Only VMFS volume '{1}' not mounted. Host does not support ATS or ATS initialization has failed. |
esx.problem.vmfs.error.volume.is.locked | Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover. |
esx.problem.vmfs.extent.offline | An attached device {1} may be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible. |
esx.problem.vmfs.extent.online | Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available. |
esx.problem.vmfs.heartbeat.recovered | Successfully restored access to volume {1} ({2}) following connectivity issues. |
esx.problem.vmfs.heartbeat.timedout | Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. |
esx.problem.vmfs.heartbeat.unrecoverable | Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. |
esx.problem.vmfs.journal.createfailed | No space for journal on volume {1} ({2}). Volume will remain in read-only metadata mode with limited write support until journal can be created. |
esx.problem.vmfs.lock.corruptondisk | At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too. |
esx.problem.vmfs.lockmode.inconsistency.detected | Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume. |
esx.problem.vmfs.nfs.server.disconnect | Lost connection to server {1} mount point {2} mounted as {3} ({4}). |
esx.problem.vmfs.nfs.server.restored | Restored connection to server {1} mount point {2} mounted as {3} ({4}). |
esx.problem.vmfs.resource.corruptondisk | At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too. |
esx.problem.vmfs.spanned.lockmode.inconsistency.detected | Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume. |
esx.problem.vmfs.spanstate.incompatibility.detected | Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume. |
esx.problem.vmsyslogd.remote.failure | The host "{1}" has become unreachable. Remote logging to this host has stopped. |
esx.problem.vmsyslogd.storage.logdir.invalid | The configured log directory {1} cannot be used. The default directory {2} will be used instead. |
esx.problem.vmsyslogd.unexpected | Log daemon has failed for an unexpected reason: {1} |
esx.problem.vpxa.core.dumped | {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped. |
hbr.primary.AppQuiescedDeltaCompletedEvent | Application consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred) |
hbr.primary.DeltaAbortedEvent | Sync aborted for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForDeltaAbort} |
hbr.primary.DeltaCompletedEvent | Sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). |
hbr.primary.FSQuiescedDeltaCompletedEvent | File system consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred) |
hbr.primary.FailedToStartDeltaEvent | Failed to start sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault} |
hbr.primary.FailedToStartSyncEvent | Failed to start full sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault} |
hbr.primary.InvalidDiskReplicationConfigurationEvent | Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}, disk {diskKey}: {reasonForFault.@enum.fault.ReplicationDiskConfigFault.ReasonForFault} |
hbr.primary.InvalidVmReplicationConfigurationEvent | Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reasonForFault.@enum.fault.ReplicationVmConfigFault.ReasonForFault} |
hbr.primary.NoConnectionToHbrServerEvent | No connection to VR Server for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerConnection} |
hbr.primary.NoProgressWithHbrServerEvent | VR Server error for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerProgress} |
hbr.primary.SyncCompletedEvent | Full sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). |
hbr.primary.UnquiescedDeltaCompletedEvent | Quiescing failed or the virtual machine is powered off. Unquiesced crash consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). |
hbr.primary.VmReplicationConfigurationChangedEvent | Replication configuration changed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({numDisks} disks, {rpo} minutes RPO, VR Server is {vrServerAddress}:{vrServerPort}). |
vim.event.LicenseDowngradedEvent | License downgrade: {licenseKey} removes the following features: {lostFeatures} |
vprob.net.connectivity.lost | Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}. |
vprob.net.e1000.tso6.notsupported | Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. |
vprob.net.migrate.bindtovmk | The ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. |
vprob.net.proxyswitch.port.unavailable | Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch. |
vprob.net.redundancy.degraded | Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. {3} uplinks still up. Affected portgroups: {4}. |
vprob.net.redundancy.lost | Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}. |
vprob.scsi.device.thinprov.atquota | Space utilization on thin-provisioned device {1} exceeded configured threshold. |
vprob.storage.connectivity.lost | Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. |
vprob.storage.redundancy.degraded | Path redundancy to storage device {1} degraded. Path {2} is down. {3} remaining active paths. Affected datastores: {4}. |
vprob.storage.redundancy.lost | Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. |
vprob.vmfs.error.volume.is.locked | Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover. |
vprob.vmfs.extent.offline | An attached device {1} might be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible. |
vprob.vmfs.extent.online | Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available. |
vprob.vmfs.heartbeat.recovered | Successfully restored access to volume {1} ({2}) following connectivity issues. |
vprob.vmfs.heartbeat.timedout | Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. |
vprob.vmfs.heartbeat.unrecoverable | Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. |
vprob.vmfs.journal.createfailed | No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support. |
vprob.vmfs.lock.corruptondisk | At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume may be damaged too. |
vprob.vmfs.nfs.server.disconnect | Lost connection to server {1} mount point {2} mounted as {3} ({4}). |
vprob.vmfs.nfs.server.restored | Restored connection to server {1} mount point {2} mounted as {3} ({4}). |
vprob.vmfs.resource.corruptondisk | At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too. |
com.vmware.cl.CopyLibraryItemEvent | Copied Library Item {targetLibraryItemName} to Library {targetLibraryName}({targetLibraryId}). Source Library Item {sourceLibraryItemName}({sourceLibraryItemId}), source Library {sourceLibraryName}({sourceLibraryId}). |
com.vmware.cl.CopyLibraryItemFailEvent | Failed to copy Library Item {targetLibraryItemName}. |
com.vmware.cl.CreateLibraryEvent | Created Library {libraryName} |
com.vmware.cl.CreateLibraryFailEvent | Failed to create Library {libraryName} |
com.vmware.cl.CreateLibraryItemEvent | Created Library Item {libraryItemName} in Library {libraryName}({libraryId}). |
com.vmware.cl.CreateLibraryItemFailEvent | Failed to create Library Item {libraryItemName}. |
com.vmware.cl.DeleteLibraryEvent | Deleted Library {libraryName} |
com.vmware.cl.DeleteLibraryFailEvent | Failed to delete Library |
com.vmware.cl.DeleteLibraryItemEvent | Deleted Library Item {libraryItemName} in Library {libraryName}({libraryId}). |
com.vmware.cl.DeleteLibraryItemFailEvent | Failed to delete Library Item. |
com.vmware.cl.UpdateLibraryEvent | Updated Library {libraryName} |
com.vmware.cl.UpdateLibraryFailEvent | Failed to update Library |
com.vmware.cl.UpdateLibraryItemEvent | Updated Library Item {libraryItemName} in Library {libraryName}({libraryId}). |
com.vmware.cl.UpdateLibraryItemFailEvent | Failed to update Library Item. |
com.vmware.rbd.activateRuleSet | Activate Rule Set |
com.vmware.rbd.fdmPackageMissing | A host in an HA cluster does not have the 'vmware-fdm' package in its image profile |
com.vmware.rbd.hostProfileRuleAssocEvent | A host profile associated with one or more active rules was deleted. |
com.vmware.rbd.ignoreMachineIdentity | Ignoring the AutoDeploy.MachineIdentity event, since the host is already provisioned through Auto Deploy |
com.vmware.rbd.pxeBootNoImageRule | Unable to PXE boot host since it does not match any rules |
com.vmware.rbd.pxeBootUnknownHost | PXE Booting unknown host |
com.vmware.rbd.pxeProfileAssoc | Attach PXE Profile |
com.vmware.rbd.vmcaCertGenerationFailureEvent | Failed to generate host certificates using VMCA |
com.vmware.vim.eam.agency.create | {agencyName} created by {ownerName} |
com.vmware.vim.eam.agency.destroyed | {agencyName} removed from the vSphere ESX Agent Manager |
com.vmware.vim.eam.agency.goalstate | {agencyName} changed goal state from {oldGoalState} to {newGoalState} |
com.vmware.vim.eam.agency.statusChanged | Agency status changed from {oldStatus} to {newStatus} |
com.vmware.vim.eam.agency.updated | Configuration updated for {agencyName} |
com.vmware.vim.eam.agent.created | Agent added to host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.destroyed | Agent removed from host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.destroyedNoHost | Agent removed from host ({agencyName}) |
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn | Agent VM {vm.name} has been powered on. Marking agent as available to proceed with the agent workflow ({agencyName}) |
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning | Agent VM {vm.name} has been provisioned. Marking agent as available to proceed with the agent workflow ({agencyName}) |
com.vmware.vim.eam.agent.statusChanged | Agent status changed from {oldStatus} to {newStatus} |
com.vmware.vim.eam.agent.task.deleteVm | Agent VM {vmName} is deleted on host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.task.deployVm | Agent VM {vm.name} is provisioned on host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.task.powerOffVm | Agent VM {vm.name} powered off, on host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.task.powerOnVm | Agent VM {vm.name} powered on, on host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.task.vibInstalled | Agent installed VIB {vib} on host {host.name} ({agencyName}) |
com.vmware.vim.eam.agent.task.vibUninstalled | Agent uninstalled VIB {vib} on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.cannotAccessAgentOVF | Unable to access agent OVF package at {url} ({agencyName}) |
com.vmware.vim.eam.issue.cannotAccessAgentVib | Unable to access agent VIB module at {url} ({agencyName}) |
com.vmware.vim.eam.issue.hostInMaintenanceMode | Agent cannot complete an operation since the host {host.name} is in maintenance mode ({agencyName}) |
com.vmware.vim.eam.issue.hostInStandbyMode | Agent cannot complete an operation since the host {host.name} is in standby mode ({agencyName}) |
com.vmware.vim.eam.issue.hostPoweredOff | Agent cannot complete an operation since the host {host.name} is powered off ({agencyName}) |
com.vmware.vim.eam.issue.incompatibleHostVersion | Agent is not deployed due to incompatible host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.insufficientIpAddresses | Insufficient IP addresses in network protocol profile in agent's VM network ({agencyName}) |
com.vmware.vim.eam.issue.insufficientResources | Agent cannot be provisioned due to insufficient resources on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.insufficientSpace | Agent on {host.name} cannot be provisioned due to insufficient space on datastore ({agencyName}) |
com.vmware.vim.eam.issue.missingAgentIpPool | No network protocol profile associated with the agent's VM network ({agencyName}) |
com.vmware.vim.eam.issue.missingDvFilterSwitch | dvFilter switch is not configured on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.noAgentVmDatastore | No agent datastore configuration on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.noAgentVmNetwork | No agent network configuration on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.noCustomAgentVmDatastore | Agent datastore(s) {customAgentVmDatastoreName} not available on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.noCustomAgentVmNetwork | Agent network(s) {customAgentVmNetworkName} not available on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.orphandedDvFilterSwitch | Unused dvFilter switch on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.orphanedAgency | Orphaned agency found ({agencyName}) |
com.vmware.vim.eam.issue.ovfInvalidFormat | OVF used to provision agent on host {host.name} has invalid format ({agencyName}) |
com.vmware.vim.eam.issue.ovfInvalidProperty | OVF environment used to provision agent on host {host.name} has one or more invalid properties ({agencyName}) |
com.vmware.vim.eam.issue.resolved | Issue {type} resolved (key {key}) |
com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode | Cannot put host into maintenance mode ({agencyName}) |
com.vmware.vim.eam.issue.vibInvalidFormat | Invalid format for VIB module at {url} ({agencyName}) |
com.vmware.vim.eam.issue.vibNotInstalled | VIB module for agent is not installed on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode | Host must be put into maintenance mode to complete agent VIB installation ({agencyName}) |
com.vmware.vim.eam.issue.vibRequiresHostReboot | Host {host.name} must be rebooted to complete agent VIB installation ({agencyName}) |
com.vmware.vim.eam.issue.vibRequiresManualInstallation | VIB {vib} requires manual installation on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.vibRequiresManualUninstallation | VIB {vib} requires manual uninstallation on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.vmCorrupted | Agent VM {vm.name} on host {host.name} is corrupted ({agencyName}) |
com.vmware.vim.eam.issue.vmDeployed | Agent VM {vm.name} is provisioned on host {host.name} when it should be removed ({agencyName}) |
com.vmware.vim.eam.issue.vmMarkedAsTemplate | Agent VM {vm.name} on host {host.name} is marked as template ({agencyName}) |
com.vmware.vim.eam.issue.vmNotDeployed | Agent VM is missing on host {host.name} ({agencyName}) |
com.vmware.vim.eam.issue.vmOrphaned | Orphaned agent VM {vm.name} on host {host.name} detected ({agencyName}) |
com.vmware.vim.eam.issue.vmPoweredOff | Agent VM {vm.name} on host {host.name} is expected to be powered on ({agencyName}) |
com.vmware.vim.eam.issue.vmPoweredOn | Agent VM {vm.name} on host {host.name} is expected to be powered off ({agencyName}) |
com.vmware.vim.eam.issue.vmSuspended | Agent VM {vm.name} on host {host.name} is expected to be powered on but is suspended ({agencyName}) |
com.vmware.vim.eam.issue.vmWrongFolder | Agent VM {vm.name} on host {host.name} is in the wrong VM folder ({agencyName}) |
com.vmware.vim.eam.issue.vmWrongResourcePool | Agent VM {vm.name} on host {host.name} is in the wrong resource pool ({agencyName}) |
com.vmware.vim.eam.login.succeeded | Successful login by {user} into vSphere ESX Agent Manager |
com.vmware.vim.eam.logout | User {user} logged out of vSphere ESX Agent Manager by logging out of the vCenter server |
com.vmware.vim.eam.task.setupDvFilter | DvFilter switch '{switchName}' is set up on host {host.name} |
com.vmware.vim.eam.task.tearDownDvFilter | DvFilter switch '{switchName}' is torn down on host {host.name} |
com.vmware.vim.eam.unauthorized.access | Unauthorized access by {user} in vSphere ESX Agent Manager |
com.vmware.vim.eam.vum.failedtouploadvib | Failed to upload {vibUrl} to VMware Update Manager ({agencyName}) |
ExitedStandbyModeEvent | The host {host.name} is no longer in standby mode |
ExitingStandbyModeEvent | The host {host.name} is exiting standby mode |
ExitMaintenanceModeEvent | Host {host.name} in {datacenter.name} has exited maintenance mode |
ExitStandbyModeFailedEvent | The host {host.name} could not exit standby mode |
ad.event.ImportCertEvent | Import certificate succeeded. |
ad.event.ImportCertFailedEvent | Import certificate failed. |
ad.event.JoinDomainEvent | Join domain succeeded. |
ad.event.JoinDomainFailedEvent | Join domain failed. |
ad.event.LeaveDomainEvent | Leave domain succeeded. |
ad.event.LeaveDomainFailedEvent | Leave domain failed. |
com.vmware.license.HostLicenseExpiredEvent | Expired host license or evaluation period. |
com.vmware.license.HostSubscriptionLicenseExpiredEvent | Expired host time-limited license. |
com.vmware.license.VcLicenseExpiredEvent | Expired vCenter Server license or evaluation period. |
com.vmware.license.VcSubscriptionLicenseExpiredEvent | Expired vCenter Server time-limited license. |
com.vmware.license.vsan.HostSsdOverUsageEvent | The capacity of the flash disks on the host exceeds the limit of the Virtual SAN license. |
com.vmware.license.vsan.LicenseExpiryEvent | Expired Virtual SAN license or evaluation period. |
com.vmware.license.vsan.SubscriptionLicenseExpiredEvent | Expired Virtual SAN time-limited license. |
com.vmware.vc.HA.AllHostAddrsPingable | The vSphere HA agent on the host {host.name} in cluster {computeResource.name} in {datacenter.name} can reach all the cluster management addresses |
com.vmware.vc.HA.AllIsoAddrsPingable | All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent | vSphere HA answered the lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} |
com.vmware.vc.HA.AnsweredVmTerminatePDLEvent | vSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name} |
com.vmware.vc.HA.AutoStartDisabled | vSphere HA disabled the automatic Virtual Machine Startup/Shutdown feature on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with HA when reacting to a host failure. |
com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore | vSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s) |
com.vmware.vc.HA.ClusterContainsIncompatibleHosts | vSphere HA cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts together with hosts of more recent versions; this combination is not fully supported. |
com.vmware.vc.HA.ClusterFailoverActionCompletedEvent | vSphere HA completed a virtual machine failover action in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.HA.ConnectedToMaster | vSphere HA agent on host {host.name} connected to the vSphere HA master on host {masterHostName} in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.HA.CreateConfigVvolFailedEvent | vSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: {fault} |
com.vmware.vc.HA.CreateConfigVvolSucceededEvent | vSphere HA successfully created a configuration vVol after the previous failure |
com.vmware.vc.HA.DasAgentRunningEvent | vSphere HA agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running |
com.vmware.vc.HA.DasFailoverHostFailedEvent | vSphere HA detected a possible failure of failover host {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.DasFailoverHostIsolatedEvent | Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.DasFailoverHostPartitionedEvent | Failover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master |
com.vmware.vc.HA.DasFailoverHostUnreachableEvent | The vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable, but the host responds to ICMP pings |
com.vmware.vc.HA.DasHostFailedEvent | vSphere HA detected a possible host failure of host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.HA.DasHostIsolatedEvent | vSphere HA detected that host {host.name} is isolated from cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.DasHostMonitoringDisabledEvent | vSphere HA host monitoring is disabled. No virtual machine failover will occur until Host Monitoring is re-enabled for cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.FailedRestartAfterIsolationEvent | vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on. |
com.vmware.vc.HA.HostDasAgentHealthyEvent | vSphere HA agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy |
com.vmware.vc.HA.HostDoesNotSupportVsan | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature |
com.vmware.vc.HA.HostHasNoIsolationAddrsDefined | Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA. |
com.vmware.vc.HA.HostHasNoMountedDatastores | vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores. |
com.vmware.vc.HA.HostHasNoSslThumbprint | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified. |
com.vmware.vc.HA.HostIncompatibleWithHA | The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with vSphere HA. |
com.vmware.vc.HA.HostPartitionedFromMasterEvent | vSphere HA detected that host {host.name} is in a different network partition than the master in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.HostUnconfigureError | There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, reconnect the host to vCenter Server. |
com.vmware.vc.HA.VMIsHADisabledIsolationEvent | vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled |
com.vmware.vc.HA.VMIsHADisabledRestartEvent | vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled |
com.vmware.vc.HA.VcCannotFindMasterEvent | vCenter Server is unable to find a master vSphere HA agent in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.HA.VmDasResetAbortedEvent | vSphere HA was unable to reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries |
com.vmware.vc.HA.VmNotProtectedEvent | Virtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and HA may not attempt to restart it after a failure. |
com.vmware.vc.HA.VmProtectedEvent | Virtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and HA will attempt to restart it after a failure. |
com.vmware.vc.HA.VmUnprotectedEvent | Virtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected. |
com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull | vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space |
com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore | vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore} |
com.vmware.vc.HA.VmcpStorageFailureCleared | Datastore {ds.name} mounted on host {host.name} was inaccessible. The condition was cleared and the datastore is now accessible |
com.vmware.vc.HA.VmcpStorageFailureDetectedForVm | vSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} in {datacenter.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore |
com.vmware.vc.HA.VmcpTerminateVmAborted | vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries |
com.vmware.vc.HA.VmcpTerminatingVm | vSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM was affected by an inaccessible datastore |
com.vmware.vc.VmDiskConsolidatedEvent | Virtual machine {vm.name} disks consolidated successfully on {host.name} in cluster {computeResource.name} in {datacenter.name}. |
com.vmware.vc.VmDiskConsolidationNeeded | Virtual machine {vm.name} disk consolidation is needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. |
com.vmware.vc.VmDiskConsolidationNoLongerNeeded | Virtual machine {vm.name} disk consolidation is no longer needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. |
com.vmware.vc.VmDiskFailedToConsolidateEvent | Virtual machine {vm.name} disk consolidation failed on {host.name} in cluster {computeResource.name} in {datacenter.name}. |
com.vmware.vc.certmgr.HostCertManagementModeChangedEvent | Host Certificate Management Mode changed from {previousMode} to {presentMode} |
com.vmware.vc.certmgr.HostCertMetadataChangedEvent | Host Certificate Management Metadata changed |
com.vmware.vc.dvs.LacpConfigInconsistentEvent | Single Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled. |
com.vmware.vc.ft.VmAffectedByDasDisabledEvent | vSphere HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. vSphere HA will not restart VM {vm.name} or its Secondary VM after a failure. |
com.vmware.vc.ha.VmRestartedByHAEvent | vSphere HA restarted virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} |
com.vmware.vc.host.AutoStartReconfigureFailedEvent | Reconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed |
com.vmware.vc.host.clear.vFlashResource.inaccessible | Host's virtual flash resource is accessible again. |
com.vmware.vc.host.problem.DeprecatedVMFSVolumeFound | Deprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version. |
com.vmware.vc.host.problem.vFlashResource.inaccessible | Host's virtual flash resource is inaccessible. |
com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent | Virtual flash resource capacity is extended |
com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent | Virtual flash resource is configured on the host |
com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent | Virtual flash resource is removed from the host |
com.vmware.vc.host.vFlash.modulesLoadedEvent | Virtual flash modules are loaded or reloaded on the host |
com.vmware.vc.iofilter.FilterInstallationFailedEvent | vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed |
com.vmware.vc.iofilter.FilterInstallationSuccessEvent | vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful |
com.vmware.vc.iofilter.FilterUninstallationFailedEvent | vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed |
com.vmware.vc.iofilter.FilterUninstallationSuccessEvent | vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful |
com.vmware.vc.iofilter.FilterUpgradeFailedEvent | vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed |
com.vmware.vc.iofilter.FilterUpgradeSuccessEvent | vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has succeeded |
com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEvent | vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully registered |
com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEvent | vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully unregistered |
com.vmware.vc.profile.AnswerFileExportedEvent | Answer file for host {host.name} in datacenter {datacenter.name} has been exported |
com.vmware.vc.profile.AnswerFileUpdatedEvent | Host customization settings for host {host.name} in datacenter {datacenter.name} have been updated |
com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent | The datastore maintenance mode operation has been canceled |
com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent | Configured storage DRS on datastore cluster {objectName} |
com.vmware.vc.sdrs.ConsistencyGroupViolationEvent | Datastore cluster {objectName} has datastores that belong to different SRM Consistency Groups |
com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent | Datastore {ds.name} has entered maintenance mode |
com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent | Datastore {ds.name} is entering maintenance mode |
com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent | Datastore {ds.name} has exited maintenance mode |
com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent | Datastore {ds.name} encountered errors while entering maintenance mode |
com.vmware.vc.sdrs.StorageDrsDisabledEvent | Disabled storage DRS on datastore cluster {objectName} |
com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent | Storage DRS invocation failed on datastore cluster {objectName} |
com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent | A new storage DRS recommendation has been generated on datastore cluster {objectName} |
com.vmware.vc.sdrs.StorageDrsRecommendationApplied | All pending recommendations on datastore cluster {objectName} were applied |
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccess | Refreshing CA certificates and CRLs succeeded for all registered VASA providers. |
com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent | Quick stats on {host.name} in {computeResource.name} in {datacenter.name} are not up-to-date |
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent | HA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled |
com.vmware.vc.vcp.FtFailoverEvent | FT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure |
com.vmware.vc.vcp.FtFailoverFailedEvent | FT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary |
com.vmware.vc.vcp.FtSecondaryRestartEvent | HA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure |
com.vmware.vc.vcp.FtSecondaryRestartFailedEvent | FT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart |
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent | HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long |
com.vmware.vc.vcp.VcpNoActionEvent | HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting |
com.vmware.vc.vcp.VmRestartEvent | HA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.vcp.VmRestartFailedEvent | Virtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart |
com.vmware.vc.vm.PowerOnAfterCloneErrorEvent | Virtual machine {vm.name} failed to power on after cloning on host {host.name} in datacenter {datacenter.name} |
com.vmware.vc.vm.VmRegisterFailedEvent | Virtual machine {vm.name} registration on {host.name} in datacenter {datacenter.name} failed |
com.vmware.vc.vmam.AppMonitoringNotSupported | Application monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent | vSphere HA detected application heartbeat failure for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} |
com.vmware.vc.vsan.ChecksumDisabledHostFoundEvent | Found a checksum disabled host {host.name} in a checksum protected vCenter Server cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.vsan.HostNotInClusterEvent | {host.name} with Virtual SAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name} |
com.vmware.vc.vsan.HostNotInVsanClusterEvent | {host.name} is in a Virtual SAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have Virtual SAN service enabled |
com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent | Virtual SAN vendor provider {host.name} has been successfully unregistered |
com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent | Virtual SAN vendor provider {host.name} has been successfully registered |
com.vmware.vc.vsan.NetworkMisConfiguredEvent | Virtual SAN network is not configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
esx.audit.dcui.defaults.factoryrestore | The host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.disabled | The DCUI has been disabled. |
esx.audit.dcui.enabled | The DCUI has been enabled. |
esx.audit.dcui.host.reboot | The host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.host.shutdown | The host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.hostagents.restart | The management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.dcui.network.factoryrestore | The host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. |
esx.audit.esximage.install.novalidation | Attempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations. |
esx.audit.host.boot | Host has booted. |
esx.audit.host.stop.reboot | Host is rebooting. |
esx.audit.host.stop.shutdown | Host is shutting down. |
esx.audit.lockdownmode.disabled | Administrator access to the host has been enabled. |
esx.audit.lockdownmode.enabled | Administrator access to the host has been disabled. |
esx.audit.lockdownmode.exceptions.changed | List of lockdown exception users has been changed. |
esx.audit.maintenancemode.canceled | The host has canceled entering maintenance mode. |
esx.audit.maintenancemode.entered | The host has entered maintenance mode. |
esx.audit.maintenancemode.entering | The host has begun entering maintenance mode. |
esx.audit.maintenancemode.exited | The host has exited maintenance mode. |
esx.audit.net.firewall.disabled | Firewall has been disabled. |
esx.audit.shell.disabled | The ESXi command line shell has been disabled. |
esx.audit.shell.enabled | The ESXi command line shell has been enabled. |
esx.audit.ssh.disabled | SSH access has been disabled. |
esx.audit.ssh.enabled | SSH access has been enabled. |
esx.audit.usb.config.changed | USB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
esx.audit.vmfs.lvm.device.discovered | One or more LVM devices have been discovered on this host. |
esx.audit.vsan.clustering.enabled | Virtual SAN clustering and directory services have been enabled. |
esx.audit.vsan.net.vnic.added | Virtual SAN virtual NIC has been added. |
esx.clear.coredump.configured | A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved. |
esx.clear.coredump.configured2 | At least one coredump target has been configured. Host core dumps will be saved. |
esx.problem.coredump.unconfigured | No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved. |
esx.problem.coredump.unconfigured2 | No coredump target has been configured. Host core dumps cannot be saved. |
esx.problem.cpu.amd.mce.dram.disabled | DRAM ECC not enabled. Please enable it in BIOS. |
esx.problem.cpu.intel.ioapic.listing.error | Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform. |
esx.problem.cpu.mce.invalid | MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware. |
esx.problem.host.coredump | An unread host kernel core dump has been found. |
esx.problem.migrate.vmotion.default.heap.create.failed | Failed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure. |
esx.problem.scsi.apd.event.descriptor.alloc.failed | No memory to allocate APD (All Paths Down) event subsystem. |
esx.problem.scsi.device.io.invalid.disk.qfull.value | QFullSampleSize should be bigger than QFullThreshold. LUN queue depth throttling algorithm will not function as expected. Please set the QFullSampleSize and QFullThreshold disk configuration values in ESX correctly. |
esx.problem.syslog.config | System logging is not configured on host {host.name}. Please check Syslog options for the host under Configuration -> Software -> Advanced Settings in vSphere client. |
esx.problem.syslog.nonpersistent | System logs on host {host.name} are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition. |
esx.problem.visorfs.failure | An operation on the root filesystem has failed. |
esx.problem.vmsyslogd.storage.failure | Logging to storage has failed. Logs are no longer being stored locally on this host. |
hbr.primary.ConnectionRestoredToHbrServerEvent | Connection to VR Server restored for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
hbr.primary.DeltaStartedEvent | Sync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
hbr.primary.QuiesceNotSupported | Quiescing is not supported for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
hbr.primary.RpoOkForServerEvent | VR Server is compatible with the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
hbr.primary.RpoTooLowForServerEvent | VR Server does not support the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
hbr.primary.SyncStartedEvent | Full sync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. |
vim.event.SubscriptionLicenseExpiredEvent | The time-limited license on host {host.name} has expired. To comply with the EULA, renew the license at http://my.vmware.com |
com.vmware.vim.eam.issue.unknownAgentVm | Unknown agent VM {vm.name} |
com.vmware.vim.eam.login.invalid | Failed login to vSphere ESX Agent Manager |
com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted | Scan for unknown agent VMs completed |
com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated | Scan for unknown agent VMs initiated |
FailoverLevelRestored | Sufficient resources are available to satisfy vSphere HA failover level in cluster {computeResource.name} in {datacenter.name} |
GeneralEvent | General event: {message} |
GeneralHostErrorEvent | Error detected on {host.name} in {datacenter.name}: {message} |
GeneralHostInfoEvent | Issue detected on {host.name} in {datacenter.name}: {message} |
GeneralHostWarningEvent | Issue detected on {host.name} in {datacenter.name}: {message} |
GeneralUserEvent | User logged event: {message} |
GeneralVmErrorEvent | Error detected for {vm.name} on {host.name} in {datacenter.name}: {message} |
GeneralVmInfoEvent | Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} |
GeneralVmWarningEvent | Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} |
GhostDvsProxySwitchDetectedEvent | The vSphere Distributed Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter Server or does not contain this host. |
GhostDvsProxySwitchRemovedEvent | A ghost proxy switch {switchUuid} on the host {host.name} was resolved. |
GlobalMessageChangedEvent | The message changed: {message} |
HealthStatusChangedEvent | {componentName} status changed from {oldStatus} to {newStatus} |
HostAddedEvent | Added host {host.name} to datacenter {datacenter.name} |
HostAddFailedEvent | Cannot add host {hostname} to datacenter {datacenter.name} |
HostAdminDisableEvent | Administrator access to the host {host.name} is disabled |
HostAdminEnableEvent | Administrator access to the host {host.name} has been restored |
HostCnxFailedAccountFailedEvent | Cannot connect {host.name} in {datacenter.name}: cannot configure management account |
HostCnxFailedAlreadyManagedEvent | Cannot connect {host.name} in {datacenter.name}: already managed by {serverName} |
HostCnxFailedBadCcagentEvent | Cannot connect host {host.name} in {datacenter.name}: server agent is not responding |
HostCnxFailedBadUsernameEvent | Cannot connect {host.name} in {datacenter.name}: incorrect user name or password |
HostCnxFailedBadVersionEvent | Cannot connect {host.name} in {datacenter.name}: incompatible version |
HostCnxFailedCcagentUpgradeEvent | Cannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service. |
HostCnxFailedEvent | Cannot connect {host.name} in {datacenter.name}: error connecting to host |
HostCnxFailedNetworkErrorEvent | Cannot connect {host.name} in {datacenter.name}: network error |
HostCnxFailedNoAccessEvent | Cannot connect host {host.name} in {datacenter.name}: account has insufficient privileges |
HostCnxFailedNoConnectionEvent | Cannot connect host {host.name} in {datacenter.name} |
HostCnxFailedNoLicenseEvent | Cannot connect {host.name} in {datacenter.name}: not enough CPU licenses |
HostCnxFailedNotFoundEvent | Cannot connect {host.name} in {datacenter.name}: incorrect host name |
HostCnxFailedTimeoutEvent | Cannot connect {host.name} in {datacenter.name}: time-out waiting for host response |
HostComplianceCheckedEvent | Host {host.name} checked for compliance. |
HostCompliantEvent | Host {host.name} is in compliance with the attached profile |
HostConfigAppliedEvent | Host configuration changes applied. |
HostConnectedEvent | Connected to {host.name} in {datacenter.name} |
HostConnectionLostEvent | Host {host.name} in {datacenter.name} is not responding |
HostDasDisabledEvent | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} is disabled |
HostDasDisablingEvent | vSphere HA is being disabled on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} |
HostDasEnabledEvent | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} is enabled |
HostDasEnablingEvent | Enabling vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} |
HostDasErrorEvent | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error {message}: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} |
HostDasOkEvent | vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} is configured correctly |
HostDisconnectedEvent | Disconnected from {host.name} in {datacenter.name}. Reason: {reason.@enum.HostDisconnectedEvent.ReasonCode} |
HostDVPortEvent | dvPort connected to host {host.name} in {datacenter.name} changed status |
HostEnableAdminFailedEvent | Cannot restore some administrator permissions to the host {host.name} |
HostExtraNetworksEvent | Host {host.name} has the following extra networks not used by other hosts for vSphere HA communication: {ips}. Consider using the vSphere HA advanced option das.allowNetwork to control network usage |
HostGetShortNameFailedEvent | Cannot complete the command 'hostname -s' on host {host.name}, or the command returned an incorrect name format |
HostInAuditModeEvent | Host {host.name} is running in audit mode. The host's configuration will not be persistent across reboots. |
HostInventoryFullEvent | Maximum ({capacity}) number of hosts allowed for this edition of vCenter Server has been reached |
HostInventoryUnreadableEvent | The virtual machine inventory file on host {host.name} is damaged or unreadable. |
HostIpChangedEvent | IP address of the host {host.name} changed from {oldIP} to {newIP} |
HostIpInconsistentEvent | Configuration of host IP address is inconsistent on host {host.name}: address resolved to {ipAddress} and {ipAddress2} |
HostIpToShortNameFailedEvent | Cannot resolve IP address to short name on host {host.name} |
HostIsolationIpPingFailedEvent | vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} could not reach isolation address: {isolationIp} |
HostLicenseExpiredEvent | A host license for {host.name} has expired |
HostLocalPortCreatedEvent | A host local port {hostLocalPort.portKey} is created on vSphere Distributed Switch {hostLocalPort.switchUuid} to recover from management network connectivity loss on virtual NIC device {hostLocalPort.vnic} on the host {host.name}. |
HostMissingNetworksEvent | Host {host.name} does not have the following networks used by other hosts for vSphere HA communication: {ips}. Consider using the vSphere HA advanced option das.allowNetwork to control network usage |
HostMonitoringStateChangedEvent | vSphere HA host monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.ServiceState} |
HostNoAvailableNetworksEvent | Host {host.name} in cluster {computeResource.name} in {datacenter.name} currently has no available networks for vSphere HA communication. The following networks are currently used by HA: {ips} |
HostNoHAEnabledPortGroupsEvent | Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no port groups enabled for vSphere HA communication. |
HostNonCompliantEvent | Host {host.name} is not in compliance with the attached profile |
HostNoRedundantManagementNetworkEvent | Host {host.name} in cluster {computeResource.name} in {datacenter.name} currently has no management network redundancy |
HostNotInClusterEvent | Host {host.name} is not a cluster member in {datacenter.name} |
HostOvercommittedEvent | Insufficient capacity in host {computeResource.name} to satisfy resource configuration in {datacenter.name} |
HostPrimaryAgentNotShortNameEvent | Primary agent {primaryAgent} was not specified as a short name to host {host.name} |
HostProfileAppliedEvent | Profile is applied on the host {host.name} |
HostReconnectionFailedEvent | Cannot reconnect to {host.name} in {datacenter.name} |
HostRemovedEvent | Removed host {host.name} in {datacenter.name} |
HostShortNameInconsistentEvent | Host names {shortName} and {shortName2} both resolved to the same IP address. Check the host's network configuration and DNS entries |
HostShortNameToIpFailedEvent | Cannot resolve short name {shortName} to IP address on host {host.name} |
HostShutdownEvent | Shut down of {host.name} in {datacenter.name}: {reason} |
HostStatusChangedEvent | Configuration status on host {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} |
HostSyncFailedEvent | Cannot synchronize host {host.name}. {reason.msg} |
HostUpgradeFailedEvent | Cannot install or upgrade vCenter agent service on {host.name} in {datacenter.name} |
HostUserWorldSwapNotEnabledEvent | The userworld swap is not enabled on the host {host.name} |
HostVnicConnectedToCustomizedDVPortEvent | Host {host.name} vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey} with port level configuration, which might be different from the dvPort group. |
HostWwnChangedEvent | WWNs are changed for {host.name} |
HostWwnConflictEvent | The WWN ({wwn}) of {host.name} conflicts with the currently registered WWN |
IncorrectHostInformationEvent | Host {host.name} did not provide the information needed to acquire the correct set of licenses |
InfoUpgradeEvent | {message} |
InsufficientFailoverResourcesEvent | Insufficient resources to satisfy vSphere HA failover level on cluster {computeResource.name} in {datacenter.name} |
InvalidEditionEvent | The license edition '{feature}' is invalid |
IScsiBootFailureEvent | Booting from iSCSI failed with an error. See the VMware Knowledge Base for information on configuring iBFT networking. |
LicenseExpiredEvent | License {feature.featureName} has expired |
LicenseNonComplianceEvent | License inventory is not compliant. Licenses are overused |
LicenseRestrictedEvent | Unable to acquire licenses due to a restriction in the option file on the license server. |
LicenseServerAvailableEvent | License server {licenseServer} is available |
LicenseServerUnavailableEvent | License server {licenseServer} is unavailable |
LocalDatastoreCreatedEvent | Created local datastore {datastore.name} on {host.name} in {datacenter.name} |
LocalTSMEnabledEvent | ESXi Shell for the host {host.name} has been enabled |
LockerMisconfiguredEvent | Datastore {datastore}, which is configured to back the locker, does not exist |
LockerReconfiguredEvent | Locker was reconfigured from {oldDatastore} to {newDatastore} datastore |
MigrationErrorEvent | Unable to migrate {vm.name} from {host.name} in {datacenter.name}: {fault.msg} |
MigrationHostErrorEvent | Unable to migrate {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} |
MigrationHostWarningEvent | Migration of {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} |
MigrationResourceErrorEvent | Cannot migrate {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} |
MigrationResourceWarningEvent | Migration of {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} |
MigrationWarningEvent | Migration of {vm.name} from {host.name} in {datacenter.name}: {fault.msg} |
MtuMatchEvent | The MTU configured in the vSphere Distributed Switch matches the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} |
MtuMismatchEvent | The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} |
NASDatastoreCreatedEvent | Created NAS datastore {datastore.name} on {host.name} in {datacenter.name} |
NetworkRollbackEvent | Network configuration on the host {host.name} was rolled back because it disconnected the host from vCenter Server. |
NoAccessUserEvent | Cannot login user {userName}@{ipAddress}: no permission |
NoDatastoresConfiguredEvent | No datastores have been configured on the host {host.name} |
NoLicenseEvent | A required license {feature.featureName} is not reserved |
NoMaintenanceModeDrsRecommendationForVM | Unable to automatically migrate {vm.name} from {host.name} |
NonVIWorkloadDetectedOnDatastoreEvent | An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}. |
NotEnoughResourcesToStartVmEvent | Insufficient resources to fail over {vm.name} in {computeResource.name} that resides in {datacenter.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault} |
OutOfSyncDvsHost | The vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server. |
PermissionAddedEvent | Permission created for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} |
PermissionRemovedEvent | Permission rule removed for {principal} on {entity.name} |
PermissionUpdatedEvent | Permission changed for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} |
ProfileAssociatedEvent | Profile {profile.name} attached. |
ProfileChangedEvent | Profile {profile.name} was changed. |
ProfileCreatedEvent | Profile is created. |
ProfileDissociatedEvent | Profile {profile.name} detached. |
ProfileReferenceHostChangedEvent | Profile {profile.name} reference host changed. |
ProfileRemovedEvent | Profile was removed. |
RecoveryEvent | The host {hostName} network connectivity was recovered on the management virtual NIC {vnic} by connecting to a new port {portKey} on the vSphere Distributed Switch {dvsUuid}. |
RemoteTSMEnabledEvent | SSH for the host {host.name} has been enabled |
ResourcePoolCreatedEvent | Created resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} |
ResourcePoolDestroyedEvent | Removed resource pool {resourcePool.name} on {computeResource.name} in {datacenter.name} |
ResourcePoolMovedEvent | Moved resource pool {resourcePool.name} from {oldParent.name} to {newParent.name} on {computeResource.name} in {datacenter.name} |
ResourcePoolReconfiguredEvent | Updated configuration for {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} |
ResourceViolatedEvent | Resource usage exceeds configuration for resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} |
RoleAddedEvent | New role {role.name} created |
RoleRemovedEvent | Role {role.name} removed |
RoleUpdatedEvent | Modified role {role.name} |
RollbackEvent | The Network API {methodName} on this entity caused the host {hostName} to be disconnected from the vCenter Server. The configuration change was rolled back on the host. |
ScheduledTaskCompletedEvent | Task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} completed successfully |
ScheduledTaskCreatedEvent | Created task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} |
ScheduledTaskEmailCompletedEvent | Task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} sent email to {to} |
ScheduledTaskEmailFailedEvent | Task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} cannot send email to {to}: {reason.msg} |
ScheduledTaskFailedEvent | Task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} cannot be completed: {reason.msg} |
ScheduledTaskReconfiguredEvent | Reconfigured task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} |
ScheduledTaskRemovedEvent | Removed task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} |
ScheduledTaskStartedEvent | Running task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} |
ServerLicenseExpiredEvent | A vCenter Server license has expired |
ServerStartedSessionEvent | vCenter started |
SessionTerminatedEvent | A session for user '{terminatedUsername}' has stopped |
TaskEvent | Task: {info.descriptionId} |
TaskTimeoutEvent | Task: {info.descriptionId} timed out |
TeamingMatchEvent | Teaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} matches the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus} |
TeamingMisMatchEvent | Teaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} does not match the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus} |
TemplateBeingUpgradedEvent | Upgrading template {legacyTemplate} |
TemplateUpgradedEvent | Template {legacyTemplate} upgrade completed |
TemplateUpgradeFailedEvent | Cannot upgrade template {legacyTemplate} due to: {reason.msg} |
TimedOutHostOperationEvent | The operation performed on {host.name} in {datacenter.name} timed out |
UnlicensedVirtualMachinesEvent | There are {unlicensed} unlicensed virtual machines on host {host}; only {available} licenses are available |
UnlicensedVirtualMachinesFoundEvent | {unlicensed} unlicensed virtual machines found on host {host} |
UpdatedAgentBeingRestartedEvent | The agent on host {host.name} is updated and will soon restart |
UplinkPortMtuNotSupportEvent | Not all VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. |
UplinkPortMtuSupportEvent | All VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. |
UplinkPortVlanTrunkedEvent | The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. |
UplinkPortVlanUntrunkedEvent | Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. |
UserAssignedToGroup | User {userLogin} was added to group {group} |
UserLoginSessionEvent | User {userName}@{ipAddress} logged in as {userAgent} |
UserLogoutSessionEvent | User {userName}@{ipAddress} logged out (login time: {loginTime}, number of API invocations: {callCount}, user agent: {userAgent}) |
UserPasswordChanged | Password was changed for account {userLogin} on host {host.name} |
UserUnassignedFromGroup | User {userLogin} removed from group {group} |
UserUpgradeEvent | {message} |
VcAgentUninstalledEvent | vCenter agent has been uninstalled from {host.name} in {datacenter.name} |
VcAgentUninstallFailedEvent | Cannot uninstall vCenter agent from {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} |
VcAgentUpgradedEvent | vCenter agent has been upgraded on {host.name} in {datacenter.name} |
VcAgentUpgradeFailedEvent | Cannot upgrade vCenter agent on {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} |
VimAccountPasswordChangedEvent | VIM account password was changed on host {host.name} |
VmAcquiredMksTicketEvent | Remote console to {vm.name} on {host.name} in {datacenter.name} has been opened |
VmAcquiredTicketEvent | A ticket for {vm.name} of type {ticketType.@enum.VirtualMachine.TicketType} on {host.name} in {datacenter.name} has been acquired |
VmAutoRenameEvent | Invalid name for {vm.name} on {host.name} in {datacenter.name}. Renamed from {oldName} to {newName} |
VmBeingClonedEvent | Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} |
VmBeingClonedNoFolderEvent | Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} to a vApp |
VmBeingCreatedEvent | Creating {vm.name} on host {host.name} in {datacenter.name} |
VmBeingDeployedEvent | Deploying {vm.name} on host {host.name} in {datacenter.name} from template {srcTemplate.name} |
VmBeingHotMigratedEvent | Migrating {vm.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name} in {datacenter.name} |
VmBeingMigratedEvent | Relocating {vm.name} from {host.name}, {ds.name} in {datacenter.name} to {destHost.name}, {destDatastore.name} in {destDatacenter.name} |
VmBeingRelocatedEvent | Relocating {vm.name} in {datacenter.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name} |
VmClonedEvent | Clone of {sourceVm.name} completed |
VmCloneFailedEvent | Cannot clone {vm.name}: {reason.msg} |
VmConfigMissingEvent | Configuration file for {vm.name} on {host.name} in {datacenter.name} cannot be found |
VmConnectedEvent | Virtual machine {vm.name} is connected |
VmCreatedEvent | Created virtual machine {vm.name} on {host.name} in {datacenter.name} |
VmDasBeingResetEvent | {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode} |
VmDasBeingResetWithScreenshotEvent | {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}. |
VmDasResetFailedEvent | vSphere HA cannot reset {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmDasUpdateErrorEvent | Unable to update vSphere HA agents given the state of {vm.name} |
VmDasUpdateOkEvent | vSphere HA agents have been updated with the current state of the virtual machine |
VmDateRolledBackEvent | Disconnecting all hosts as the date of virtual machine {vm.name} has been rolled back |
VmDeployedEvent | Template {srcTemplate.name} deployed on host {host.name} |
VmDeployFailedEvent | Cannot deploy template: {reason.msg} |
VmDisconnectedEvent | {vm.name} on host {host.name} in {datacenter.name} is disconnected |
VmDiscoveredEvent | Discovered {vm.name} on {host.name} in {datacenter.name} |
VmDiskFailedEvent | Cannot create virtual disk {disk} |
VmDVPortEvent | dvPort connected to VM {vm.name} on {host.name} in {datacenter.name} changed status |
VmEmigratingEvent | Migrating {vm.name} off host {host.name} in {datacenter.name} |
VmEndRecordingEvent | End a recording session on {vm.name} |
VmEndReplayingEvent | End a replay session on {vm.name} |
VmFailedMigrateEvent | Cannot migrate {vm.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name} in {datacenter.name} |
VmFailedRelayoutEvent | Cannot complete relayout {vm.name} on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedRelayoutOnVmfs2DatastoreEvent | Cannot complete relayout for virtual machine {vm.name}, which has disks on a VMFS2 volume. |
VmFailedStartingSecondaryEvent | vCenter cannot start the Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Reason: {reason.@enum.VmFailedStartingSecondaryEvent.FailureReason} |
VmFailedToPowerOffEvent | Cannot power off {vm.name} on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedToPowerOnEvent | Cannot power on {vm.name} on {host.name} in {datacenter.name}. {reason.msg} |
VmFailedToRebootGuestEvent | Cannot reboot the guest OS for {vm.name} on {host.name} in {datacenter.name}. {reason.msg} |
VmFailedToResetEvent | Cannot reset {vm.name} on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedToShutdownGuestEvent | {vm.name} cannot shut down the guest OS on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedToStandbyGuestEvent | {vm.name} cannot standby the guest OS on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedToSuspendEvent | Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg} |
VmFailedUpdatingSecondaryConfig | vCenter cannot update the Fault Tolerance secondary VM configuration for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmFailoverFailed | vSphere HA was unable to fail over {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg} |
VmFaultToleranceStateChangedEvent | Fault Tolerance state of {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState} |
VmFaultToleranceTurnedOffEvent | Fault Tolerance protection has been turned off for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmFaultToleranceVmTerminatedEvent | The Fault Tolerance VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason} |
VMFSDatastoreCreatedEvent | Created VMFS datastore {datastore.name} on {host.name} in {datacenter.name} |
VMFSDatastoreExpandedEvent | Expanded VMFS datastore {datastore.name} on {host.name} in {datacenter.name} |
VMFSDatastoreExtendedEvent | Extended VMFS datastore {datastore.name} on {host.name} in {datacenter.name} |
VmGuestOSCrashedEvent | {vm.name} on {host.name}: Guest operating system has crashed. |
VmGuestRebootEvent | Guest OS reboot for {vm.name} on {host.name} in {datacenter.name} |
VmGuestShutdownEvent | Guest OS shut down for {vm.name} on {host.name} in {datacenter.name} |
VmGuestStandbyEvent | Guest OS standby for {vm.name} on {host.name} in {datacenter.name} |
VmHealthMonitoringStateChangedEvent | vSphere HA VM monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.VmMonitoringState} |
VmInstanceUuidAssignedEvent | Assigned a new instance UUID ({instanceUuid}) to {vm.name} |
VmInstanceUuidChangedEvent | The instance UUID of {vm.name} has been changed from ({oldInstanceUuid}) to ({newInstanceUuid}) |
VmInstanceUuidConflictEvent | The instance UUID ({instanceUuid}) of {vm.name} conflicts with the instance UUID assigned to {conflictedVm.name} |
VmMacAssignedEvent | New MAC address ({mac}) assigned to adapter {adapter} for {vm.name} |
VmMacChangedEvent | Changed MAC address from {oldMac} to {newMac} for adapter {adapter} for {vm.name} |
VmMacConflictEvent | The MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} |
VmMaxFTRestartCountReached | vSphere HA stopped trying to restart Secondary VM {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} because the maximum VM restart count was reached |
VmMaxRestartCountReached | vSphere HA stopped trying to restart {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} because the maximum VM restart count was reached |
VmMessageErrorEvent | Error message on {vm.name} on {host.name} in {datacenter.name}: {message} |
VmMessageEvent | Message on {vm.name} on {host.name} in {datacenter.name}: {message} |
VmMessageWarningEvent | Warning message on {vm.name} on {host.name} in {datacenter.name}: {message} |
VmMigratedEvent | Migration of virtual machine {vm.name} from {sourceHost.name}, {sourceDatastore.name} to {host.name}, {ds.name} completed |
VmNoCompatibleHostForSecondaryEvent | No compatible host for the Fault Tolerance secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmNoNetworkAccessEvent | Not all networks for {vm.name} are accessible by {destHost.name} |
VmOrphanedEvent | {vm.name} does not exist on {host.name} in {datacenter.name} |
VMotionLicenseExpiredEvent | A vMotion license for {host.name} has expired |
VmPoweredOffEvent | {vm.name} on {host.name} in {datacenter.name} is powered off |
VmPoweredOnEvent | {vm.name} on {host.name} in {datacenter.name} is powered on |
VmPoweringOnWithCustomizedDVPortEvent | Virtual machine {vm.name} powered On with vNICs connected to dvPorts that have a port level configuration, which might be different from the dvPort group configuration. |
VmPowerOffOnIsolationEvent | vSphere HA powered off {vm.name} on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name} |
VmPrimaryFailoverEvent | Fault Tolerance VM ({vm.name}) failed over to {host.name} in cluster {computeResource.name} in {datacenter.name}. {reason.@enum.VirtualMachine.NeedSecondaryReason} |
VmReconfiguredEvent | Reconfigured {vm.name} on {host.name} in {datacenter.name} |
VmRegisteredEvent | Registered {vm.name} on {host.name} in {datacenter.name} |
VmRelayoutSuccessfulEvent | Relayout of {vm.name} on {host.name} in {datacenter.name} completed |
VmRelayoutUpToDateEvent | {vm.name} on {host.name} in {datacenter.name} is in the correct format and relayout is not necessary |
VmReloadFromPathEvent | {vm.name} on {host.name} reloaded from new configuration {configPath}. |
VmReloadFromPathFailedEvent | {vm.name} on {host.name} could not be reloaded from {configPath}. |
VmRelocatedEvent | Completed the relocation of the virtual machine |
VmRelocateFailedEvent | Cannot relocate virtual machine '{vm.name}' in {datacenter.name} |
VmRemoteConsoleConnectedEvent | Remote console connected to {vm.name} on host {host.name} |
VmRemoteConsoleDisconnectedEvent | Remote console disconnected from {vm.name} on host {host.name} |
VmRemovedEvent | Removed {vm.name} on {host.name} from {datacenter.name} |
VmRenamedEvent | Renamed {vm.name} from {oldName} to {newName} in {datacenter.name} |
VmRequirementsExceedCurrentEVCModeEvent | Feature requirements of {vm.name} exceed capabilities of {host.name}'s current EVC mode. |
VmResettingEvent | {vm.name} on {host.name} in {datacenter.name} is reset |
VmResourcePoolMovedEvent | Moved {vm.name} from resource pool {oldParent.name} to {newParent.name} in {datacenter.name} |
VmResourceReallocatedEvent | Changed resource allocation for {vm.name} |
VmRestartedOnAlternateHostEvent | Virtual machine {vm.name} was restarted on {host.name} since {sourceHost.name} failed |
VmResumingEvent | {vm.name} on {host.name} in {datacenter.name} is resumed |
VmSecondaryAddedEvent | A Fault Tolerance secondary VM has been added for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmSecondaryDisabledBySystemEvent | vCenter disabled Fault Tolerance on VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the Secondary VM could not be powered On. |
VmSecondaryDisabledEvent | Disabled Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmSecondaryEnabledEvent | Enabled Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmSecondaryStartedEvent | Started Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmShutdownOnIsolationEvent | vSphere HA shut down {vm.name} on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation} |
VmStartingEvent | {vm.name} on host {host.name} in {datacenter.name} is starting |
VmStartingSecondaryEvent | Starting Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} |
VmStartRecordingEvent | Start a recording session on {vm.name} |
VmStartReplayingEvent | Start a replay session on {vm.name} |
VmStaticMacConflictEvent | The static MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} |
VmStoppingEvent | {vm.name} on {host.name} in {datacenter.name} is stopping |
VmSuspendedEvent | {vm.name} on {host.name} in {datacenter.name} is suspended |
VmSuspendingEvent | {vm.name} on {host.name} in {datacenter.name} is being suspended |
VmTimedoutStartingSecondaryEvent | Starting the Fault Tolerance secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} timed out within {timeout} ms |
VmUnsupportedStartingEvent | Unsupported guest OS {guestId} for {vm.name} on {host.name} in {datacenter.name} |
VmUpgradeCompleteEvent | Virtual machine compatibility upgraded to {version.@enum.vm.hwVersion} |
VmUpgradeFailedEvent | Cannot upgrade virtual machine compatibility. |
VmUpgradingEvent | Upgrading virtual machine compatibility of {vm.name} in {datacenter.name} to {version.@enum.vm.hwVersion} |
VmUuidAssignedEvent | Assigned new BIOS UUID ({uuid}) to {vm.name} on {host.name} in {datacenter.name} |
VmUuidChangedEvent | Changed BIOS UUID from {oldUuid} to {newUuid} for {vm.name} on {host.name} in {datacenter.name} |
VmUuidConflictEvent | The BIOS UUID ({uuid}) of {vm.name} conflicts with that of {conflictedVm.name} |
VmVnicPoolReservationViolationClearEvent | The reservation violation on the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is cleared |
VmVnicPoolReservationViolationRaiseEvent | The reservation allocated to the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is violated |
VmWwnAssignedEvent | New WWNs assigned to {vm.name} |
VmWwnChangedEvent | WWNs are changed for {vm.name} |
VmWwnConflictEvent | The WWN ({wwn}) of {vm.name} conflicts with the currently registered WWN |
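
Each Key in the table is the event type identifier exposed through the vSphere API, and the Message column is the template that vCenter expands into the event's fullFormattedMessage property. The sketch below, a minimal example assuming pyVmomi is installed (the hostname and credentials are placeholders), shows how these keys can be used to filter the vCenter event history for specific event types.

```python
# Minimal sketch: query vCenter's event history for specific event types
# using pyVmomi. Hostname and credentials below are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only shortcut: skip certificate verification.
# Use a properly verified SSL context in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
try:
    # eventTypeId takes the Key values from the table above.
    spec = vim.event.EventFilterSpec(
        eventTypeId=["VmPoweredOffEvent", "VmFailedToPowerOnEvent"])
    for event in si.content.eventManager.QueryEvents(spec):
        # fullFormattedMessage is the Message template with placeholders
        # such as {vm.name} and {host.name} already expanded.
        print(event.createdTime, event.fullFormattedMessage)
finally:
    Disconnect(si)
```

QueryEvents returns a single batch of matching historical events; for continuous monitoring, EventManager.CreateCollectorForEvents returns an EventHistoryCollector that can be paged or polled for new events as they arrive.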