
vCenter Events

The following table lists all VMware vCenter Server events, keyed by event type, together with each event's message template.

Key | Message
AccountCreatedEvent | Account {spec.id} was created on host {host.name}
AccountRemovedEvent | Account {account} was removed on host {host.name}
AccountUpdatedEvent | Account {spec.id} was updated on host {host.name}
AdminPasswordNotChangedEvent | The default password for the root user on the host {host.name} has not been changed
AlarmAcknowledgedEvent | Acknowledged alarm '{alarm.name}' on {entity.name}
AlarmActionTriggeredEvent | Alarm '{alarm.name}' on {entity.name} triggered an action
AlarmClearedEvent | Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status}
AlarmCreatedEvent | Created alarm '{alarm.name}' on {entity.name}
AlarmEmailCompletedEvent | Alarm '{alarm.name}' on {entity.name} sent email to {to}
AlarmEmailFailedEvent | Alarm '{alarm.name}' on {entity.name} cannot send email to {to}
AlarmReconfiguredEvent | Reconfigured alarm '{alarm.name}' on {entity.name}
AlarmRemovedEvent | Removed alarm '{alarm.name}' on {entity.name}
AlarmScriptCompleteEvent | Alarm '{alarm.name}' on {entity.name} ran script {script}
AlarmScriptFailedEvent | Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg}
AlarmSnmpCompletedEvent | Alarm '{alarm.name}': an SNMP trap for entity {entity.name} was sent
AlarmSnmpFailedEvent | Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg}
AlarmStatusChangedEvent | Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status}
AllVirtualMachinesLicensedEvent | All running virtual machines are licensed
AlreadyAuthenticatedSessionEvent | User cannot logon since the user is already logged on
BadUsernameSessionEvent | Cannot login {userName}@{ipAddress}
CanceledHostOperationEvent | The operation performed on host {host.name} in {datacenter.name} was canceled
ChangeOwnerOfFileEvent | Changed ownership of file name (unknown) from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}.
ChangeOwnerOfFileFailedEvent | Cannot change ownership of file name (unknown) from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}.
ClusterComplianceCheckedEvent | Checked cluster for compliance
ClusterCreatedEvent | Created cluster {computeResource.name} in {datacenter.name}
ClusterDestroyedEvent | Removed cluster {computeResource.name} in datacenter {datacenter.name}
ClusterOvercommittedEvent | Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name}
ClusterReconfiguredEvent | Reconfigured cluster {computeResource.name} in datacenter {datacenter.name}
ClusterStatusChangedEvent | Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name}
CustomFieldDefAddedEvent | Created new custom field definition {name}
CustomFieldDefRemovedEvent | Removed field definition {name}
CustomFieldDefRenamedEvent | Renamed field definition from {name} to {newName}
CustomFieldValueChangedEvent | Changed custom field {name} on {entity.name} in {datacenter.name} to {value}
CustomizationFailed | Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details.
CustomizationLinuxIdentityFailed | An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details.
CustomizationNetworkSetupFailed | An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details.
CustomizationStartedEvent | Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS.
CustomizationSucceeded | Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS.
CustomizationSysprepFailed | The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information.
CustomizationUnknownFailure | An error occurred while customizing VM {vm.name}. For details reference the log file {logLocation} in the guest OS.
DasAdmissionControlDisabledEvent | vSphere HA admission control disabled for cluster {computeResource.name} in {datacenter.name}
DasAdmissionControlEnabledEvent | vSphere HA admission control enabled for cluster {computeResource.name} in {datacenter.name}
DasAgentFoundEvent | Re-established contact with a primary host in this vSphere HA cluster
DasAgentUnavailableEvent | Unable to contact a primary vSphere HA agent in cluster {computeResource.name} in {datacenter.name}
DasClusterIsolatedEvent | All hosts in the vSphere HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network.
DasDisabledEvent | vSphere HA disabled for cluster {computeResource.name} in {datacenter.name}
DasEnabledEvent | vSphere HA enabled for cluster {computeResource.name} in {datacenter.name}
DasHostFailedEvent | A possible host failure has been detected by vSphere HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name}
DasHostIsolatedEvent | Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name}
DatacenterCreatedEvent | Created datacenter {datacenter.name} in folder {parent.name}
DatacenterRenamedEvent | Renamed datacenter from {oldName} to {newName}
DatastoreCapacityIncreasedEvent | Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name}
DatastoreDestroyedEvent | Removed unconfigured datastore {datastore.name}
DatastoreDiscoveredEvent | Discovered datastore {datastore.name} on {host.name} in {datacenter.name}
DatastoreDuplicatedEvent | Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name}
DatastoreFileCopiedEvent | File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile}
DatastoreFileDeletedEvent | File or directory {targetFile} deleted from {datastore.name}
DatastoreFileMovedEvent | File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile}
DatastoreIORMReconfiguredEvent | Reconfigured Storage I/O Control on datastore {datastore.name}
DatastorePrincipalConfigured | Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name}
DatastoreRemovedOnHostEvent | Removed datastore {datastore.name} from {host.name} in {datacenter.name}
DatastoreRenamedEvent | Renamed datastore from {oldName} to {newName} in {datacenter.name}
DatastoreRenamedOnHostEvent | Renamed datastore from {oldName} to {newName} in {datacenter.name}
DrsDisabledEvent | Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name}
DrsEnabledEvent | Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name}
DrsEnteredStandbyModeEvent | DRS put {host.name} into standby mode
DrsEnteringStandbyModeEvent | DRS is putting {host.name} into standby mode
DrsExitedStandbyModeEvent | DRS moved {host.name} out of standby mode
DrsExitingStandbyModeEvent | DRS is moving {host.name} out of standby mode
DrsExitStandbyModeFailedEvent | DRS cannot move {host.name} out of standby mode
DrsInvocationFailedEvent | DRS invocation not completed
DrsRecoveredFromFailureEvent | DRS has recovered from the failure
DrsResourceConfigureFailedEvent | Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS.
DrsResourceConfigureSyncedEvent | Resource configuration specification returns to synchronization from previous failure on host '{host.name}' in {datacenter.name}
DrsRuleComplianceEvent | {vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules
DrsRuleViolationEvent | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule
DrsSoftRuleViolationEvent | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host soft affinity rule
DrsVmMigratedEvent | DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name}
DrsVmPoweredOnEvent | DRS powered On {vm.name} on {host.name} in {datacenter.name}
DuplicateIpDetectedEvent | Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP}
DvpgImportEvent | Import operation with type {importType} was performed on {net.name}
DvpgRestoreEvent | Restore operation was performed on {net.name}
DVPortgroupCreatedEvent | dvPort group {net.name} in {datacenter.name} was added to switch {dvs.name}.
DVPortgroupDestroyedEvent | dvPort group {net.name} in {datacenter.name} was deleted.
DVPortgroupEvent |
DVPortgroupReconfiguredEvent | dvPort group {net.name} in {datacenter.name} was reconfigured.
DVPortgroupRenamedEvent | dvPort group {oldName} in {datacenter.name} was renamed to {newName}
DvsCreatedEvent | A vSphere Distributed Switch {dvs.name} was created in {datacenter.name}.
DvsDestroyedEvent | vSphere Distributed Switch {dvs.name} in {datacenter.name} was deleted.
DvsEvent | vSphere Distributed Switch event
DvsHealthStatusChangeEvent | Health check status was changed in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}
DvsHostBackInSyncEvent | The vSphere Distributed Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server.
DvsHostJoinedEvent | The host {hostJoined.name} joined the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsHostLeftEvent | The host {hostLeft.name} left the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsHostStatusUpdated | The host {hostMember.name} changed status on the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsHostWentOutOfSyncEvent | The vSphere Distributed Switch {dvs.name} configuration on the host differed from that of the vCenter Server.
DvsImportEvent | Import operation with type {importType} was performed on {dvs.name}
DvsMergedEvent | vSphere Distributed Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}.
DvsPortBlockedEvent | The dvPort {portKey} was blocked in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortConnectedEvent | The dvPort {portKey} was connected in the vSphere Distributed Switch {dvs.name} in {datacenter.name}
DvsPortCreatedEvent | New ports were created in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortDeletedEvent | Deleted ports in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortDisconnectedEvent | The dvPort {portKey} was disconnected in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortEnteredPassthruEvent | The dvPort {portKey} was in passthrough mode in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortExitedPassthruEvent | The dvPort {portKey} was not in passthrough mode in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortJoinPortgroupEvent | The dvPort {portKey} was moved into the dvPort group {portgroupName} in {datacenter.name}.
DvsPortLeavePortgroupEvent | The dvPort {portKey} was moved out of the dvPort group {portgroupName} in {datacenter.name}.
DvsPortLinkDownEvent | The dvPort {portKey} link was down in the vSphere Distributed Switch {dvs.name} in {datacenter.name}
DvsPortLinkUpEvent | The dvPort {portKey} link was up in the vSphere Distributed Switch {dvs.name} in {datacenter.name}
DvsPortReconfiguredEvent | Reconfigured ports in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortRuntimeChangeEvent | The dvPort {portKey} runtime information changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortUnblockedEvent | The dvPort {portKey} was unblocked in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsPortVendorSpecificStateChangeEvent | The dvPort {portKey} vendor specific state changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.
DvsReconfiguredEvent | The vSphere Distributed Switch {dvs.name} in {datacenter.name} was reconfigured.
DvsRenamedEvent | The vSphere Distributed Switch {oldName} in {datacenter.name} was renamed to {newName}.
DvsRestoreEvent | Restore operation was performed on {dvs.name}
DvsUpgradeAvailableEvent | An upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} is available.
DvsUpgradedEvent | vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} was upgraded.
DvsUpgradeInProgressEvent | An upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name} is in progress.
DvsUpgradeRejectedEvent | Cannot complete an upgrade for the vSphere Distributed Switch {dvs.name} in datacenter {datacenter.name}
EnteredMaintenanceModeEvent | Host {host.name} in {datacenter.name} has entered maintenance mode
EnteredStandbyModeEvent | The host {host.name} is in standby mode
EnteringMaintenanceModeEvent | Host {host.name} in {datacenter.name} has started to enter maintenance mode
EnteringStandbyModeEvent | The host {host.name} is entering standby mode
ErrorUpgradeEvent | {message}
com.vmware.license.AddLicenseEvent | License {licenseKey} added to VirtualCenter
com.vmware.license.AssignLicenseEvent | License {licenseKey} assigned to asset {entityName} with id {entityId}
com.vmware.license.DLFDownloadFailedEvent | Failed to download license information from the host {hostname} due to {errorReason}
com.vmware.license.LicenseAssignFailedEvent | License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}.
com.vmware.license.LicenseCapacityExceededEvent | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText})
com.vmware.license.LicenseExpiryEvent | Your host license expires in {remainingDays} days. The host will disconnect from vCenter Server when its license expires.
com.vmware.license.LicenseUserThresholdExceededEvent | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the user-defined threshold ({threshold} {costUnitText})
com.vmware.license.RemoveLicenseEvent | License {licenseKey} removed from VirtualCenter
com.vmware.license.UnassignLicenseEvent | License unassigned from asset {entityName} with id {entityId}
com.vmware.pbm.profile.associate | Associated storage policy: {ProfileId} with entity: {EntityId}
com.vmware.pbm.profile.delete | Deleted storage policy: {ProfileId}
com.vmware.pbm.profile.dissociate | Dissociated storage policy: {ProfileId} from entity: {EntityId}
com.vmware.pbm.profile.updateName | Storage policy name updated for {ProfileId}. New name: {NewProfileName}
com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent | vSphere HA initiated a failover action on {pendingVms} virtual machines in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.HA.ClusterFailoverInProgressEvent | vSphere HA failover operation in progress in cluster {computeResource.name} in datacenter {datacenter.name}: {numBeingPlaced} VMs being restarted, {numToBePlaced} VMs waiting for a retry, {numAwaitingResource} VMs waiting for resources, {numAwaitingVsanVmChange} inaccessible Virtual SAN VMs
com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.HeartbeatDatastoreChanged | Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.HeartbeatDatastoreNotSufficient | The number of vSphere HA heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum}
com.vmware.vc.HA.HostAgentErrorEvent | vSphere HA agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}
com.vmware.vc.HA.HostDasErrorEvent | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason}
com.vmware.vc.HA.HostStateChangedEvent | The vSphere HA availability state of the host {host.name} in cluster in {computeResource.name} in {datacenter.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState}
com.vmware.vc.HA.HostUnconfiguredWithProtectedVms | Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected from vCenter Server, but contains {protectedVmCount} protected virtual machine(s)
com.vmware.vc.HA.InvalidMaster | vSphere HA agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised.
com.vmware.vc.HA.NotAllHostAddrsPingable | The vSphere HA agent on the host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}
com.vmware.vc.HA.StartFTSecondaryFailedEvent | vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason : {fault.msg}. vSphere HA agent will retry until it times out.
com.vmware.vc.HA.StartFTSecondarySucceededEvent | vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}.
com.vmware.vc.HA.UserHeartbeatDatastoreRemoved | vSphere HA removed datastore {dsName} from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory
com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent | vCenter Server cannot communicate with the master vSphere HA agent on {hostname} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.VcConnectedToMasterEvent | vCenter Server is connected to a master HA agent running on host {hostname} in {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.VcDisconnectedFromMasterEvent | vCenter Server is disconnected from a master HA agent running on host {hostname} in {computeResource.name} in {datacenter.name}
com.vmware.vc.VCHealthStateChangedEvent | vCenter Service overall health changed from '{oldState}' to '{newState}'
com.vmware.vc.VmCloneFailedInvalidDestinationEvent | Cannot clone {vm.name} as {destVmName} to invalid or non-existent destination with ID {invalidMoRef}: {fault}
com.vmware.vc.VmCloneToResourcePoolFailedEvent | Cannot clone {vm.name} as {destVmName} to resource pool {destResourcePool}: {fault}
com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent | CA Certificates were updated on {hostname}
com.vmware.vc.certmgr.HostCertExpirationImminentEvent | Host Certificate expiration is imminent on {hostname}. Expiration Date: {expiryDate}
com.vmware.vc.certmgr.HostCertExpiringEvent | Host Certificate on {hostname} is nearing expiration. Expiration Date: {expiryDate}
com.vmware.vc.certmgr.HostCertExpiringShortlyEvent | Host Certificate on {hostname} will expire soon. Expiration Date: {expiryDate}
com.vmware.vc.certmgr.HostCertRevokedEvent | Host Certificate on {hostname} is revoked.
com.vmware.vc.certmgr.HostCertUpdatedEvent | Host Certificate was updated on {hostname}, new thumbprint: {thumbprint}
com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent | Management Agents were restarted on {hostname}
com.vmware.vc.cim.CIMGroupHealthStateChanged | Health of {data.group} changed from {data.oldState} to {data.newState}. {data.cause}
com.vmware.vc.datastore.UpdateVmFilesFailedEvent | Failed to update VM files on datastore {ds.name} using host {hostName}
com.vmware.vc.datastore.UpdatedVmFilesEvent | Updated VM files on datastore {ds.name} using host {hostName}
com.vmware.vc.datastore.UpdatingVmFilesEvent | Updating VM files on datastore {ds.name} using host {hostName}
com.vmware.vc.guestOperations.GuestOperation | Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}.
com.vmware.vc.guestOperations.GuestOperationAuthFailure | Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}.
com.vmware.vc.host.clear.vFlashResource.reachthreshold | Host's virtual flash resource usage dropped below {1}%.
com.vmware.vc.host.problem.vFlashResource.reachthreshold | Host's virtual flash resource usage is more than {1}%.
com.vmware.vc.host.vFlash.defaultModuleChangedEvent | Any new virtual Flash Read Cache configuration request will use {vFlashModule} as default virtual flash module. All existing virtual Flash Read Cache configurations remain unchanged.
com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent | vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} registration has failed. Reason : {fault.msg}.
com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent | Failed to unregister vSphere APIs for I/O Filters (VAIO) vendor provider {host.name}. Reason : {fault.msg}.
com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}
com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}
com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent | Failed to clone state for the entity '{entityName}' on extension {extensionName}
com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent | Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName}
com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent | Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description}
com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent | Failed to register entity '{entityName}' on extension {extensionName}
com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent | Failed to unregister entities on extension {extensionName}
com.vmware.vc.ovfconsumers.ValidateOstErrorEvent | Failed to validate OVF descriptor on extension {extensionName}
com.vmware.vc.rp.ResourcePoolRenamedEvent | Resource pool '{oldName}' has been renamed to '{newName}'
com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent | Datastore cluster {objectName} has one or more datastores {datastore} shared across multiple datacenters
com.vmware.vc.sdrs.StorageDrsEnabledEvent | Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior}
com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent | Datastore cluster {objectName} is connected to one or more hosts {host} that do not support storage DRS
com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent | Storage DRS migrated disks of VM {vm.name} to datastore {ds.name}
com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent | Storage DRS placed disks of VM {vm.name} on datastore {ds.name}
com.vmware.vc.sdrs.StoragePodCreatedEvent | Created datastore cluster {objectName}
com.vmware.vc.sdrs.StoragePodDestroyedEvent | Removed datastore cluster {objectName}
com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent | SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration.
com.vmware.vc.sms.LunCapabilityInitEvent | Storage provider [{providerName}] : system capability warning for {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.LunCapabilityMetEvent | Storage provider [{providerName}] : system capability normal for {eventSubjectId}
com.vmware.vc.sms.LunCapabilityNotMetEvent | Storage provider [{providerName}] : system capability alert for {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.ObjectTypeAlarmClearedEvent | Storage provider [{providerName}] cleared a Storage Alarm of type 'Object' on {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.ObjectTypeAlarmErrorEvent | Storage provider [{providerName}] raised an alert type 'Object' on {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.ObjectTypeAlarmWarningEvent | Storage provider [{providerName}] raised a warning of type 'Object' on {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent | Storage provider [{providerName}] : thin provisioning capacity threshold normal for {eventSubjectId}
com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent | Storage provider [{providerName}] : thin provisioning capacity threshold alert for {eventSubjectId}
com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent | Storage provider [{providerName}] : thin provisioning capacity threshold warning for {eventSubjectId}
com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent | Certificate for storage provider {providerName} will expire very shortly. Expiration date : {expiryDate}
com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent | Certificate for storage provider {providerName} will expire soon. Expiration date : {expiryDate}
com.vmware.vc.sms.VasaProviderCertificateValidEvent | Certificate for storage provider {providerName} is valid
com.vmware.vc.sms.VasaProviderConnectedEvent | Storage provider {providerName} is connected
com.vmware.vc.sms.VasaProviderDisconnectedEvent | Storage provider {providerName} is disconnected
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure | Refreshing CA certificates and CRLs failed for VASA providers with url : {providerUrls}
com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent | Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}.
com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}.
com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}.
com.vmware.vc.sms.provider.health.event | Storage provider [{providerName}] : health event for {eventSubjectId} : {msgTxt}
com.vmware.vc.sms.provider.system.event | Storage provider [{providerName}] : system event : {msgTxt}
com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}.
com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}.
com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}.
com.vmware.vc.spbm.ProfileAssociationFailedEvent | Profile association/dissociation failed for {entityName}
com.vmware.vc.spbm.ServiceErrorEvent | Configuring storage policy failed for VM {entityName}. Verify that SPBM service is healthy. Fault Reason : {errorMessage}
com.vmware.vc.vcp.TestEndEvent | VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.vcp.TestStartEvent | VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.vcp.VmDatastoreFailedEvent | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore}
com.vmware.vc.vcp.VmNetworkFailedEvent | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network}
com.vmware.vc.vcp.VmPowerOffHangEvent | HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying
com.vmware.vc.vcp.VmWaitForCandidateHostEvent | HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying
com.vmware.vc.vflash.SsdConfigurationFailedEvent | Configuration on disk {disk.path} failed. Reason : {fault.msg}
com.vmware.vc.vm.DstVmMigratedEvent | Virtual machine {vm.name} {newMoRef} in {computeResource.name} in {datacenter.name} was migrated from {oldMoRef}
com.vmware.vc.vm.SrcVmMigratedEvent | Virtual machine {vm.name} {oldMoRef} in {computeResource.name} in {datacenter.name} was migrated to {newMoRef}
com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent | Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is not satisfied
com.vmware.vc.vm.VmAdapterResvSatisfiedEvent | Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is satisfied
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot | Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId}
com.vmware.vc.vm.VmStateRevertedToSnapshot | The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent | vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.vmam.VmAppHealthStateChangedEvent | vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent | Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum
com.vmware.vc.vsan.DatastoreNoCapacityEvent | Virtual SAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity
com.vmware.vc.vsan.HostCommunicationErrorEvent | Host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster
com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent | Virtual SAN vendor provider {host.name} registration has failed. Reason : {fault.msg}.
com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent | Virtual SAN vendor provider {host.name} registration has failed. Reason : {fault.msg}.
com.vmware.vc.vsan.RogueHostFoundEvent | Found host(s) {hostString} participating in the Virtual SAN service in cluster {computeResource.name} in datacenter {datacenter.name} is not a member of this host's vCenter cluster
com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent | Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg}
com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent | Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg}
com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent | Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation
com.vmware.vim.vsm.dependency.bind.vApp | vService dependency '{dependencyName}' on vApp '{targetName}' bound to provider '{providerName}'
com.vmware.vim.vsm.dependency.bind.vm | vService dependency '{dependencyName}' on '{vm.name}' bound to provider '{providerName}'
com.vmware.vim.vsm.dependency.create.vApp | Created vService dependency '{dependencyName}' with type '{dependencyType}' on vApp '{targetName}'
com.vmware.vim.vsm.dependency.create.vm | Created vService dependency '{dependencyName}' with type '{dependencyType}' on '{vm.name}'
com.vmware.vim.vsm.dependency.destroy.vApp | Destroyed vService dependency '{dependencyName}' on vApp '{targetName}'
com.vmware.vim.vsm.dependency.destroy.vm | Destroyed vService dependency '{dependencyName}' on '{vm.name}'
com.vmware.vim.vsm.dependency.reconfigure.vApp | Reconfigured vService dependency '{dependencyName}' on vApp '{targetName}'
com.vmware.vim.vsm.dependency.reconfigure.vm | Reconfigured vService dependency '{dependencyName}' on '{vm.name}'
com.vmware.vim.vsm.dependency.unbind.vApp | vService dependency '{dependencyName}' on vApp '{targetName}' unbound from provider '{providerName}'
com.vmware.vim.vsm.dependency.unbind.vm | vService dependency '{dependencyName}' on '{vm.name}' unbound from provider '{providerName}'
com.vmware.vim.vsm.dependency.update.vApp | Updated vService dependency '{dependencyName}' on vApp '{targetName}'
com.vmware.vim.vsm.dependency.update.vm | Updated vService dependency '{dependencyName}' on '{vm.name}'
com.vmware.vim.vsm.provider.register | vService provider '{providerName}' with type '{providerType}' registered for extension '{extensionKey}'
com.vmware.vim.vsm.provider.unregister | vService provider '{providerName}' with type '{providerType}' unregistered for extension '{extensionKey}'
com.vmware.vim.vsm.provider.update | Updating vService provider '{providerName}' registered for extension '{extensionKey}'
esx.audit.account.locked | Remote access for ESXi local user account '{1}' has been locked for {2} seconds after {3} failed login attempts.
esx.audit.account.loginfailuresMultiple remote login failures detected for ESXi local user account '{1}'.
esx.audit.dcui.login.failedAuthentication of user {1} has failed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.login.passwd.changedLogin password for user {1} has been changed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.network.restartA management interface {1} has been restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.esxcli.host.poweroffThe host is being powered off through esxcli. Reason for powering off: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information.
esx.audit.esxcli.host.restartThe host is being rebooted through esxcli. Reason for reboot: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information.
esx.audit.esximage.hostacceptance.changedHost acceptance level changed from {1} to {2}
esx.audit.esximage.install.securityalertSECURITY ALERT: Installing image profile '{1}' with {2}.
esx.audit.esximage.profile.install.successfulSuccessfully installed image profile '{1}'. Installed {2} VIB(s), removed {3} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction.
esx.audit.esximage.profile.update.successfulSuccessfully updated host to image profile '{1}'. Installed {2} VIB(s), removed {3} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction.
esx.audit.esximage.vib.install.successfulSuccessfully installed {1} VIB(s), removed {2} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction.
esx.audit.esximage.vib.remove.successfulSuccessfully removed {1} VIB(s). Please use 'esxcli software profile get' or see log for more detail about the transaction.
esx.audit.host.maxRegisteredVMsExceededThe number of virtual machines registered on host {host.name} in cluster {computeResource.name} in {datacenter.name} exceeded limit: {current} registered, {limit} is the maximum supported.
esx.audit.net.firewall.config.changedFirewall configuration has changed. Operation '{1}' for rule set {2} succeeded.
esx.audit.net.firewall.enabledFirewall has been enabled for port {1}.
esx.audit.net.firewall.port.hookedPort {1} is now protected by Firewall.
esx.audit.net.firewall.port.removedPort {1} is no longer protected with Firewall.
esx.audit.net.lacp.disableLACP for VDS {1} is disabled.
esx.audit.net.lacp.enableLACP for VDS {1} is enabled.
esx.audit.net.lacp.uplink.connectedLACP info: uplink {1} on VDS {2} got connected.
esx.audit.uw.secpolicy.alldomains.level.changedThe enforcement level for all security domains has been changed to {1}. The enforcement level must always be set to enforcing.
esx.audit.uw.secpolicy.domain.level.changedThe enforcement level for security domain {1} has been changed to {2}. The enforcement level must always be set to enforcing.
esx.audit.vmfs.volume.mountedFile system {1} on volume {2} has been mounted in {3} mode on this host.
esx.audit.vmfs.volume.umountedThe volume {1} has been safely un-mounted. The datastore is no longer accessible on this host.
esx.clear.net.connectivity.restoredNetwork connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.
esx.clear.net.dvport.connectivity.restoredNetwork connectivity restored on DVPorts: {1}. Physical NIC {2} is up.
esx.clear.net.dvport.redundancy.restoredUplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up.
esx.clear.net.lacp.lag.transition.upLACP info: LAG {1} on VDS {2} is up.
esx.clear.net.lacp.uplink.transition.upLACP info: uplink {1} on VDS {2} is moved into link aggregation group.
esx.clear.net.lacp.uplink.unblockedLACP info: uplink {1} on VDS {2} is unblocked.
esx.clear.net.redundancy.restoredUplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.
esx.clear.net.vmnic.linkstate.upPhysical NIC {1} linkstate is up.
esx.clear.scsi.device.io.latency.improvedDevice {1} performance has improved. I/O latency reduced from {2} microseconds to {3} microseconds.
esx.clear.scsi.device.state.onDevice {1}, has been turned on administratively.
esx.clear.scsi.device.state.permanentloss.deviceonlineDevice {1}, that was permanently inaccessible is now online. No data consistency guarantees.
esx.clear.storage.apd.exitDevice or filesystem with identifier {1} has exited the All Paths Down state.
esx.clear.storage.connectivity.restoredConnectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again.
esx.clear.storage.redundancy.restoredPath redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again.
esx.problem.3rdParty.errorA 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.
esx.problem.3rdParty.infoA 3rd party component, {1}, running on ESXi has reported an informational event. If needed, please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.
esx.problem.3rdParty.warningA 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.
esx.problem.apei.bert.memory.error.correctedA corrected memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}
esx.problem.apei.bert.memory.error.fatalA fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}
esx.problem.apei.bert.memory.error.recoverableA recoverable memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}
esx.problem.apei.bert.pcie.error.correctedA corrected PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.
esx.problem.apei.bert.pcie.error.fatalPlatform encountered a fatal PCIe error in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.
esx.problem.apei.bert.pcie.error.recoverableA recoverable PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.
esx.problem.application.core.dumpedAn application ({1}) running on ESXi host has crashed ({2} time(s) so far). A core file might have been created at {3}.
esx.problem.coredump.capacity.insufficientThe storage capacity of the coredump targets is insufficient to capture a complete coredump. Recommended coredump capacity is {1} MiB.
esx.problem.coredump.copyspaceThe free space available in default coredump copy location is insufficient to copy new coredumps. Recommended free space is {1} MiB.
esx.problem.coredump.extraction.failed.nospaceThe given partition has insufficient amount of free space to extract the coredump. At least {1} MiB is required.
esx.problem.cpu.smp.ht.invalidDisabling HyperThreading due to invalid configuration: Number of threads: {1}, Number of PCPUs: {2}.
esx.problem.cpu.smp.ht.numpcpus.maxFound {1} PCPUs, but only using {2} of them due to specified limit.
esx.problem.cpu.smp.ht.partner.missingDisabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}.
esx.problem.dhclient.lease.noneUnable to obtain a DHCP lease on interface {1}.
esx.problem.dhclient.lease.offered.errorNo expiry time on offered DHCP lease from {1}.
esx.problem.esximage.install.errorCould not install image profile: {1}
esx.problem.esximage.install.invalidhardwareHost doesn't meet image profile '{1}' hardware requirements: {2}
esx.problem.esximage.install.stage.errorCould not stage image profile '{1}': {2}
esx.problem.hardware.acpi.interrupt.routing.device.invalidSkipping interrupt routing entry with bad device number: {1}. This is a BIOS bug.
esx.problem.hardware.acpi.interrupt.routing.pin.invalidSkipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug.
esx.problem.hardware.ioapic.missingIOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC.
esx.problem.hostd.core.dumped{1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.
esx.problem.iorm.badversionHost {1} cannot participate in Storage I/O Control (SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with number {4} of its counterparts on other hosts connected to this datastore.
esx.problem.iorm.nonviworkloadAn unmanaged I/O workload is detected on a SIOC-enabled datastore: {1}.
esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdownThe ESXi host's vMotion network server encountered an error while monitoring incoming network connections. Shutting down listener socket. vMotion might not be possible with this host until vMotion is manually re-enabled. Failure status: {1}
esx.problem.net.connectivity.lostLost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.
esx.problem.net.dvport.connectivity.lostLost network connectivity on DVPorts: {1}. Physical NIC {2} is down.
esx.problem.net.dvport.redundancy.degradedUplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down.
esx.problem.net.dvport.redundancy.lostLost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down.
esx.problem.net.e1000.tso6.notsupportedGuest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.
esx.problem.net.fence.port.badfenceidVMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: invalid fenceId.
esx.problem.net.fence.resource.limitedVMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: maximum number of fence networks or ports have been reached.
esx.problem.net.fence.switch.unavailableVMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: dvSwitch fence property is not set.
esx.problem.net.firewall.config.failedFirewall configuration operation '{1}' failed. The changes were not applied to rule set {2}.
esx.problem.net.firewall.port.hookfailedAdding port {1} to Firewall failed.
esx.problem.net.gateway.set.failedCannot connect to the specified gateway {1}. Failed to set it.
esx.problem.net.heap.belowthreshold{1} free size dropped below {2} percent.
esx.problem.net.lacp.lag.transition.downLACP warning: LAG {1} on VDS {2} is down.
esx.problem.net.lacp.peer.noresponseLACP error: No peer response on uplink {1} for VDS {2}.
esx.problem.net.lacp.policy.incompatibleLACP error: Current teaming policy on VDS {1} is incompatible, supported is IP hash only.
esx.problem.net.lacp.policy.linkstatusLACP error: Current teaming policy on VDS {1} is incompatible, supported link failover detection is link status only.
esx.problem.net.lacp.uplink.blockedLACP warning: uplink {1} on VDS {2} is blocked.
esx.problem.net.lacp.uplink.disconnectedLACP warning: uplink {1} on VDS {2} got disconnected.
esx.problem.net.lacp.uplink.fail.duplexLACP error: Duplex mode across all uplink ports must be full, VDS {1} uplink {2} has different mode.
esx.problem.net.lacp.uplink.fail.speedLACP error: Speed across all uplink ports must be same, VDS {1} uplink {2} has different speed.
esx.problem.net.lacp.uplink.inactiveLACP error: All uplinks on VDS {1} must be active.
esx.problem.net.lacp.uplink.transition.downLACP warning: uplink {1} on VDS {2} is moved out of link aggregation group.
esx.problem.net.migrate.bindtovmkThe ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.
esx.problem.net.migrate.unsupported.latencyESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance.
esx.problem.net.portset.port.fullPortset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports.
esx.problem.net.portset.port.vlan.invalidid{1} VLANID {2} is invalid. VLAN ID must be between 0 and 4095.
esx.problem.net.proxyswitch.port.unavailableVirtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.
esx.problem.net.redundancy.degradedUplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.
esx.problem.net.redundancy.lostLost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.
esx.problem.net.uplink.mtu.failedVMkernel failed to set the MTU value {1} on the uplink {2}.
esx.problem.net.vmknic.ip.duplicateA duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}.
esx.problem.net.vmnic.linkstate.downPhysical NIC {1} linkstate is down.
esx.problem.net.vmnic.linkstate.flappingTaking down physical NIC {1} because the link is unstable.
esx.problem.net.vmnic.watchdog.resetUplink {1} has recovered from a transient failure due to watchdog timeout
esx.problem.ntpd.clock.correction.errorNTP daemon stopped. Time correction {1} > {2} seconds. Manually set the time and restart ntpd.
esx.problem.pageretire.platform.retire.requestMemory page retirement requested by platform firmware. FRU ID: {1}. Refer to System Hardware Log: {2}
esx.problem.pageretire.selectedmpnthreshold.host.exceededNumber of host physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).
esx.problem.scratch.partition.size.smallSize of scratch partition {1} is too small. Recommended scratch partition size is {2} MiB.
esx.problem.scratch.partition.unconfiguredNo scratch partition has been configured. Recommended scratch partition size is {1} MiB.
esx.problem.scsi.device.close.failedFailed to close the device {1} properly, plugin {2}.
esx.problem.scsi.device.detach.failedDetach failed for device :{1}. Exceeded the number of devices that can be detached, please cleanup stale detach entries.
esx.problem.scsi.device.filter.attach.failedFailed to attach filters to device '%s' during registration. Plugin load failed or the filter rules are incorrect.
esx.problem.scsi.device.io.bad.plugin.typeBad plugin type for device {1}, plugin {2}
esx.problem.scsi.device.io.inquiry.failedFailed to get standard inquiry for device {1} from Plugin {2}.
esx.problem.scsi.device.io.latency.highDevice {1} performance has deteriorated. I/O latency increased from average value of {2} microseconds to {3} microseconds.
esx.problem.scsi.device.io.qerr.change.configQErr set to 0x{1} for device {2}. This may cause unexpected behavior. The system is not configured to change the QErr setting of device. The QErr value supported by system is 0x{3}. Please check the SCSI ChangeQErrSetting configuration value for ESX.
esx.problem.scsi.device.io.qerr.changedQErr set to 0x{1} for device {2}. This may cause unexpected behavior. The device was originally configured to the supported QErr setting of 0x{3}, but this has been changed and could not be changed back.
esx.problem.scsi.device.is.local.failedFailed to verify if the device {1} from plugin {2} is a local - not shared - device
esx.problem.scsi.device.is.pseudo.failedFailed to verify if the device {1} from plugin {2} is a pseudo device
esx.problem.scsi.device.is.ssd.failedFailed to verify if the device {1} from plugin {2} is a Solid State Disk device
esx.problem.scsi.device.limitreachedThe maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created.
esx.problem.scsi.device.state.offDevice {1}, has been turned off administratively.
esx.problem.scsi.device.state.permanentlossDevice {1} has been removed or is permanently inaccessible. Affected datastores (if any): {2}.
esx.problem.scsi.device.state.permanentloss.noopensPermanently inaccessible device {1} has no more opens. It is now safe to unmount datastores (if any) {2} and delete the device.
esx.problem.scsi.device.state.permanentloss.pluggedbackDevice {1} has been plugged back in after being marked permanently inaccessible. No data consistency guarantees.
esx.problem.scsi.device.state.permanentloss.withreservationheldDevice {1} has been removed or is permanently inaccessible, while holding a reservation. Affected datastores (if any): {2}.
esx.problem.scsi.device.thinprov.atquotaSpace utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}.
esx.problem.scsi.scsipath.badpath.unreachpeSanity check failed for path {1}. The path is to a vVol PE, but it goes out of adapter {2} which is not PE capable. Path dropped.
esx.problem.scsi.scsipath.badpath.unsafepeSanity check failed for path {1}. Could not safely determine if the path is to a vVol PE. Path dropped.
esx.problem.scsi.scsipath.limitreachedThe maximum number of supported paths of {1} has been reached. Path {2} could not be added.
esx.problem.scsi.unsupported.plugin.typeScsi Device Allocation not supported for plugin type {1}
esx.problem.storage.apd.startDevice or filesystem with identifier {1} has entered the All Paths Down state.
esx.problem.storage.apd.timeoutDevice or filesystem with identifier {1} has entered the All Paths Down Timeout state after being in the All Paths Down state for {2} seconds. I/Os will now be fast failed.
esx.problem.storage.connectivity.deviceporFrequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2}
esx.problem.storage.connectivity.lostLost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}.
esx.problem.storage.connectivity.pathporFrequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}
esx.problem.storage.connectivity.pathstatechangesFrequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}
esx.problem.storage.iscsi.discovery.connect.erroriSCSI discovery to {1} on {2} failed. The iSCSI Initiator could not establish a network connection to the discovery address.
esx.problem.storage.iscsi.discovery.login.erroriSCSI discovery to {1} on {2} failed. The Discovery target returned a login error of: {3}.
esx.problem.storage.iscsi.target.connect.errorLogin to iSCSI target {1} on {2} failed. The iSCSI initiator could not establish a network connection to the target.
esx.problem.storage.iscsi.target.login.errorLogin to iSCSI target {1} on {2} failed. Target returned login error of: {3}.
esx.problem.storage.iscsi.target.permanently.lostThe iSCSI target {2} was permanently removed from {1}.
esx.problem.storage.redundancy.degradedPath redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}.
esx.problem.storage.redundancy.lostLost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}.
esx.problem.vfat.filesystem.full.otherThe VFAT filesystem {1} (UUID {2}) is full.
esx.problem.vfat.filesystem.full.scratchThe host's scratch partition, which is the VFAT filesystem {1} (UUID {2}), is full.
esx.problem.visorfs.inodetable.fullThe root filesystem's file table is full. As a result, the file {1} could not be created by the application '{2}'.
esx.problem.visorfs.ramdisk.fullThe ramdisk '{1}' is full. As a result, the file {2} could not be written.
esx.problem.visorfs.ramdisk.inodetable.fullThe file table of the ramdisk '{1}' is full. As a result, the file {2} could not be created by the application '{3}'.
esx.problem.vm.kill.unexpected.fault.failureThe VM using the config file {1} could not fault in a guest physical page from the hypervisor level swap file at {2}. The VM is terminated as further progress is impossible.
esx.problem.vm.kill.unexpected.forcefulPageRetireThe VM using the config file {1} contains the host physical page {2} which was scheduled for immediate retirement. To avoid system instability the VM is forcefully powered off.
esx.problem.vm.kill.unexpected.noSwapResponseThe VM using the config file {1} did not respond to {2} swap actions in {3} seconds and is forcefully powered off to prevent system instability.
esx.problem.vm.kill.unexpected.vmtrackThe VM using the config file {1} is allocating too many pages while system is critically low in free memory. It is forcefully terminated to prevent system instability.
esx.problem.vmfs.ats.incompatibility.detectedMulti-extent ATS-only volume '{1}' ({2}) is unable to use ATS because HardwareAcceleratedLocking is disabled on this host: potential for introducing filesystem corruption. Volume should not be used from other hosts.
esx.problem.vmfs.ats.support.lostATS-Only VMFS volume '{1}' not mounted. Host does not support ATS or ATS initialization has failed.
esx.problem.vmfs.error.volume.is.lockedVolume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover.
esx.problem.vmfs.extent.offlineAn attached device {1} may be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.
esx.problem.vmfs.extent.onlineDevice {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available.
esx.problem.vmfs.heartbeat.recoveredSuccessfully restored access to volume {1} ({2}) following connectivity issues.
esx.problem.vmfs.heartbeat.timedoutLost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.
esx.problem.vmfs.heartbeat.unrecoverableLost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.
esx.problem.vmfs.journal.createfailedNo space for journal on volume {1} ({2}). Volume will remain in read-only metadata mode with limited write support until journal can be created.
esx.problem.vmfs.lock.corruptondiskAt least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
esx.problem.vmfs.lockmode.inconsistency.detectedInconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.
esx.problem.vmfs.nfs.server.disconnectLost connection to server {1} mount point {2} mounted as {3} ({4}).
esx.problem.vmfs.nfs.server.restoredRestored connection to server {1} mount point {2} mounted as {3} ({4}).
esx.problem.vmfs.resource.corruptondiskAt least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
esx.problem.vmfs.spanned.lockmode.inconsistency.detectedInconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume.
esx.problem.vmfs.spanstate.incompatibility.detectedIncompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.
esx.problem.vmsyslogd.remote.failureThe host "{1}" has become unreachable. Remote logging to this host has stopped.
esx.problem.vmsyslogd.storage.logdir.invalidThe configured log directory {1} cannot be used. The default directory {2} will be used instead.
esx.problem.vmsyslogd.unexpectedLog daemon has failed for an unexpected reason: {1}
esx.problem.vpxa.core.dumped{1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.
hbr.primary.AppQuiescedDeltaCompletedEventApplication consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred)
hbr.primary.DeltaAbortedEventSync aborted for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForDeltaAbort}
hbr.primary.DeltaCompletedEventSync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).
hbr.primary.FSQuiescedDeltaCompletedEventFile system consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred)
hbr.primary.FailedToStartDeltaEventFailed to start sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault}
hbr.primary.FailedToStartSyncEventFailed to start full sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault}
hbr.primary.InvalidDiskReplicationConfigurationEventReplication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}, disk {diskKey}: {reasonForFault.@enum.fault.ReplicationDiskConfigFault.ReasonForFault}
hbr.primary.InvalidVmReplicationConfigurationEventReplication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reasonForFault.@enum.fault.ReplicationVmConfigFault.ReasonForFault}
hbr.primary.NoConnectionToHbrServerEventNo connection to VR Server for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerConnection}
hbr.primary.NoProgressWithHbrServerEventVR Server error for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerProgress}
hbr.primary.SyncCompletedEventFull sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).
hbr.primary.UnquiescedDeltaCompletedEventQuiescing failed or the virtual machine is powered off. Unquiesced crash consistent sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).
hbr.primary.VmReplicationConfigurationChangedEventReplication configuration changed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({numDisks} disks, {rpo} minutes RPO, VR Server is {vrServerAddress}:{vrServerPort}).
vim.event.LicenseDowngradedEventLicense downgrade: {licenseKey} removes the following features: {lostFeatures}
vprob.net.connectivity.lostLost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.
vprob.net.e1000.tso6.notsupportedGuest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.
vprob.net.migrate.bindtovmkThe ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.
vprob.net.proxyswitch.port.unavailableVirtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.
vprob.net.redundancy.degradedUplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. {3} uplinks still up. Affected portgroups:{4}.
vprob.net.redundancy.lostLost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.
vprob.scsi.device.thinprov.atquotaSpace utilization on thin-provisioned device {1} exceeded configured threshold.
vprob.storage.connectivity.lostLost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}.
vprob.storage.redundancy.degradedPath redundancy to storage device {1} degraded. Path {2} is down. {3} remaining active paths. Affected datastores: {4}.
vprob.storage.redundancy.lostLost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}.
vprob.vmfs.error.volume.is.lockedVolume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover.
vprob.vmfs.extent.offlineAn attached device {1} might be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.
vprob.vmfs.extent.onlineDevice {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available.
vprob.vmfs.heartbeat.recoveredSuccessfully restored access to volume {1} ({2}) following connectivity issues.
vprob.vmfs.heartbeat.timedoutLost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.
vprob.vmfs.heartbeat.unrecoverableLost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.
vprob.vmfs.journal.createfailedNo space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support.
vprob.vmfs.lock.corruptondiskAt least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume may be damaged too.
vprob.vmfs.nfs.server.disconnectLost connection to server {1} mount point {2} mounted as {3} ({4}).
vprob.vmfs.nfs.server.restoredRestored connection to server {1} mount point {2} mounted as {3} ({4}).
vprob.vmfs.resource.corruptondiskAt least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
com.vmware.cl.CopyLibraryItemEventCopied Library Item {targetLibraryItemName} to Library {targetLibraryName}({targetLibraryId}). Source Library Item {sourceLibraryItemName}({sourceLibraryItemId}), source Library {sourceLibraryName}({sourceLibraryId}).
com.vmware.cl.CopyLibraryItemFailEventFailed to copy Library Item {targetLibraryItemName}.
com.vmware.cl.CreateLibraryEventCreated Library {libraryName}
com.vmware.cl.CreateLibraryFailEventFailed to create Library {libraryName}
com.vmware.cl.CreateLibraryItemEventCreated Library Item {libraryItemName} in Library {libraryName}({libraryId}).
com.vmware.cl.CreateLibraryItemFailEventFailed to create Library Item {libraryItemName}.
com.vmware.cl.DeleteLibraryEventDeleted Library {libraryName}
com.vmware.cl.DeleteLibraryFailEventFailed to delete Library
com.vmware.cl.DeleteLibraryItemEventDeleted Library Item {libraryItemName} in Library {libraryName}({libraryId}).
com.vmware.cl.DeleteLibraryItemFailEventFailed to delete Library Item.
com.vmware.cl.UpdateLibraryEventUpdated Library {libraryName}
com.vmware.cl.UpdateLibraryFailEventFailed to update Library
com.vmware.cl.UpdateLibraryItemEventUpdated Library Item {libraryItemName} in Library {libraryName}({libraryId}).
com.vmware.cl.UpdateLibraryItemFailEventFailed to update Library Item.
com.vmware.rbd.activateRuleSetActivate Rule Set
com.vmware.rbd.fdmPackageMissingA host in a HA cluster does not have the 'vmware-fdm' package in its image profile
com.vmware.rbd.hostProfileRuleAssocEventA host profile associated with one or more active rules was deleted.
com.vmware.rbd.ignoreMachineIdentityIgnoring the AutoDeploy.MachineIdentity event, since the host is already provisioned through Auto Deploy
com.vmware.rbd.pxeBootNoImageRuleUnable to PXE boot host since it does not match any rules
com.vmware.rbd.pxeBootUnknownHostPXE Booting unknown host
com.vmware.rbd.pxeProfileAssocAttach PXE Profile
com.vmware.rbd.vmcaCertGenerationFailureEventFailed to generate host certificates using VMCA
com.vmware.vim.eam.agency.create{agencyName} created by {ownerName}
com.vmware.vim.eam.agency.destroyed{agencyName} removed from the vSphere ESX Agent Manager
com.vmware.vim.eam.agency.goalstate{agencyName} changed goal state from {oldGoalState} to {newGoalState}
com.vmware.vim.eam.agency.statusChangedAgency status changed from {oldStatus} to {newStatus}
com.vmware.vim.eam.agency.updatedConfiguration updated {agencyName}
com.vmware.vim.eam.agent.createdAgent added to host {host.name} ({agencyName})
com.vmware.vim.eam.agent.destroyedAgent removed from host {host.name} ({agencyName})
com.vmware.vim.eam.agent.destroyedNoHostAgent removed from host ({agencyName})
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOnAgent VM {vm.name} has been powered on. Mark agent as available to proceed agent workflow ({agencyName})
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioningAgent VM {vm.name} has been provisioned. Mark agent as available to proceed agent workflow ({agencyName})
com.vmware.vim.eam.agent.statusChangedAgent status changed from {oldStatus} to {newStatus}
com.vmware.vim.eam.agent.task.deleteVmAgent VM {vmName} is deleted on host {host.name} ({agencyName})
com.vmware.vim.eam.agent.task.deployVmAgent VM {vm.name} is provisioned on host {host.name} ({agencyName})
com.vmware.vim.eam.agent.task.powerOffVmAgent VM {vm.name} powered off, on host {host.name} ({agencyName})
com.vmware.vim.eam.agent.task.powerOnVmAgent VM {vm.name} powered on, on host {host.name} ({agencyName})
com.vmware.vim.eam.agent.task.vibInstalledAgent installed VIB {vib} on host {host.name} ({agencyName})
com.vmware.vim.eam.agent.task.vibUninstalledAgent uninstalled VIB {vib} on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.cannotAccessAgentOVFUnable to access agent OVF package at {url} ({agencyName})
com.vmware.vim.eam.issue.cannotAccessAgentVibUnable to access agent VIB module at {url} ({agencyName})
com.vmware.vim.eam.issue.hostInMaintenanceModeAgent cannot complete an operation since the host {host.name} is in maintenance mode ({agencyName})
com.vmware.vim.eam.issue.hostInStandbyModeAgent cannot complete an operation since the host {host.name} is in standby mode ({agencyName})
com.vmware.vim.eam.issue.hostPoweredOffAgent cannot complete an operation since the host {host.name} is powered off ({agencyName})
com.vmware.vim.eam.issue.incompatibleHostVersionAgent is not deployed due to incompatible host {host.name} ({agencyName})
com.vmware.vim.eam.issue.insufficientIpAddressesInsufficient IP addresses in network protocol profile in agent's VM network ({agencyName})
com.vmware.vim.eam.issue.insufficientResourcesAgent cannot be provisioned due to insufficient resources on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.insufficientSpaceAgent on {host.name} cannot be provisioned due to insufficient space on datastore ({agencyName})
com.vmware.vim.eam.issue.missingAgentIpPoolNo network protocol profile associated to agent's VM network ({agencyName})
com.vmware.vim.eam.issue.missingDvFilterSwitchdvFilter switch is not configured on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.noAgentVmDatastoreNo agent datastore configuration on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.noAgentVmNetworkNo agent network configuration on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.noCustomAgentVmDatastoreAgent datastore(s) {customAgentVmDatastoreName} not available on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.noCustomAgentVmNetworkAgent network(s) {customAgentVmNetworkName} not available on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.orphandedDvFilterSwitchUnused dvFilter switch on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.orphanedAgencyOrphaned agency found. ({agencyName})
com.vmware.vim.eam.issue.ovfInvalidFormatOVF used to provision agent on host {host.name} has invalid format ({agencyName})
com.vmware.vim.eam.issue.ovfInvalidPropertyOVF environment used to provision agent on host {host.name} has one or more invalid properties ({agencyName})
com.vmware.vim.eam.issue.resolvedIssue {type} resolved (key {key})
com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceModeCannot put host into maintenance mode ({agencyName})
com.vmware.vim.eam.issue.vibInvalidFormatInvalid format for VIB module at {url} ({agencyName})
com.vmware.vim.eam.issue.vibNotInstalledVIB module for agent is not installed on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceModeHost must be put into maintenance mode to complete agent VIB installation ({agencyName})
com.vmware.vim.eam.issue.vibRequiresHostRebootHost {host.name} must be rebooted to complete agent VIB installation ({agencyName})
com.vmware.vim.eam.issue.vibRequiresManualInstallationVIB {vib} requires manual installation on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.vibRequiresManualUninstallationVIB {vib} requires manual uninstallation on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.vmCorruptedAgent VM {vm.name} on host {host.name} is corrupted ({agencyName})
com.vmware.vim.eam.issue.vmDeployedAgent VM {vm.name} is provisioned on host {host.name} when it should be removed ({agencyName})
com.vmware.vim.eam.issue.vmMarkedAsTemplateAgent VM {vm.name} on host {host.name} is marked as template ({agencyName})
com.vmware.vim.eam.issue.vmNotDeployedAgent VM is missing on host {host.name} ({agencyName})
com.vmware.vim.eam.issue.vmOrphanedOrphaned agent VM {vm.name} on host {host.name} detected ({agencyName})
com.vmware.vim.eam.issue.vmPoweredOffAgent VM {vm.name} on host {host.name} is expected to be powered on ({agencyName})
com.vmware.vim.eam.issue.vmPoweredOnAgent VM {vm.name} on host {host.name} is expected to be powered off ({agencyName})
com.vmware.vim.eam.issue.vmSuspendedAgent VM {vm.name} on host {host.name} is expected to be powered on but is suspended ({agencyName})
com.vmware.vim.eam.issue.vmWrongFolderAgent VM {vm.name} on host {host.name} is in the wrong VM folder ({agencyName})
com.vmware.vim.eam.issue.vmWrongResourcePoolAgent VM {vm.name} on host {host.name} is in the wrong resource pool ({agencyName})
com.vmware.vim.eam.login.succeededSuccessful login by {user} into vSphere ESX Agent Manager
com.vmware.vim.eam.logoutUser {user} logged out of vSphere ESX Agent Manager by logging out of the vCenter server
com.vmware.vim.eam.task.setupDvFilterDvFilter switch '{switchName}' is setup on host {host.name}
com.vmware.vim.eam.task.tearDownDvFilterDvFilter switch '{switchName}' is torn down on host {host.name}
com.vmware.vim.eam.unauthorized.accessUnauthorized access by {user} in vSphere ESX Agent Manager
com.vmware.vim.eam.vum.failedtouploadvibFailed to upload {vibUrl} to VMware Update Manager ({agencyName})
ExitedStandbyModeEventThe host {host.name} is no longer in standby mode
ExitingStandbyModeEventThe host {host.name} is exiting standby mode
ExitMaintenanceModeEventHost {host.name} in {datacenter.name} has exited maintenance mode
ExitStandbyModeFailedEventThe host {host.name} could not exit standby mode
ad.event.ImportCertEventImport certificate succeeded.
ad.event.ImportCertFailedEventImport certificate failed.
ad.event.JoinDomainEventJoin domain succeeded.
ad.event.JoinDomainFailedEventJoin domain failed.
ad.event.LeaveDomainEventLeave domain succeeded.
ad.event.LeaveDomainFailedEventLeave domain failed.
com.vmware.license.HostLicenseExpiredEventExpired host license or evaluation period.
com.vmware.license.HostSubscriptionLicenseExpiredEventExpired host time-limited license.
com.vmware.license.VcLicenseExpiredEventExpired vCenter Server license or evaluation period.
com.vmware.license.VcSubscriptionLicenseExpiredEventExpired vCenter Server time-limited license.
com.vmware.license.vsan.HostSsdOverUsageEventThe capacity of the flash disks on the host exceeds the limit of the Virtual SAN license.
com.vmware.license.vsan.LicenseExpiryEventExpired Virtual SAN license or evaluation period.
com.vmware.license.vsan.SubscriptionLicenseExpiredEventExpired Virtual SAN time-limited license.
com.vmware.vc.HA.AllHostAddrsPingableThe vSphere HA agent on the host {host.name} in cluster {computeResource.name} in {datacenter.name} can reach all the cluster management addresses
com.vmware.vc.HA.AllIsoAddrsPingableAll vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.AnsweredVmLockLostQuestionEventvSphere HA answered the lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name}
com.vmware.vc.HA.AnsweredVmTerminatePDLEventvSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name}
com.vmware.vc.HA.AutoStartDisabledvSphere HA disabled the automatic Virtual Machine Startup/Shutdown feature on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with HA when reacting to a host failure.
com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastorevSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s)
com.vmware.vc.HA.ClusterContainsIncompatibleHostsvSphere HA Cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts and more recent host versions, which isn't fully supported.
com.vmware.vc.HA.ClusterFailoverActionCompletedEventvSphere HA completed a virtual machine failover action in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.HA.ConnectedToMastervSphere HA agent on host {host.name} connected to the vSphere HA master on host {masterHostName} in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.HA.CreateConfigVvolFailedEventvSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: {fault}
com.vmware.vc.HA.CreateConfigVvolSucceededEventvSphere HA successfully created a configuration vVol after the previous failure
com.vmware.vc.HA.DasAgentRunningEventvSphere HA agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running
com.vmware.vc.HA.DasFailoverHostFailedEventvSphere HA detected a possible failure of failover host {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.DasFailoverHostIsolatedEventHost {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.DasFailoverHostPartitionedEventFailover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master
com.vmware.vc.HA.DasFailoverHostUnreachableEventThe vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable but host responds to ICMP pings
com.vmware.vc.HA.DasHostFailedEventvSphere HA detected a possible host failure of host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.HA.DasHostIsolatedEventvSphere HA detected that host {host.name} is isolated from cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.DasHostMonitoringDisabledEventvSphere HA host monitoring is disabled. No virtual machine failover will occur until Host Monitoring is re-enabled for cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.FailedRestartAfterIsolationEventvSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on.
com.vmware.vc.HA.HostDasAgentHealthyEventvSphere HA agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy
com.vmware.vc.HA.HostDoesNotSupportVsanvSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature
com.vmware.vc.HA.HostHasNoIsolationAddrsDefinedHost {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA.
com.vmware.vc.HA.HostHasNoMountedDatastoresvSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores.
com.vmware.vc.HA.HostHasNoSslThumbprintvSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified.
com.vmware.vc.HA.HostIncompatibleWithHAThe product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with vSphere HA.
com.vmware.vc.HA.HostPartitionedFromMasterEventvSphere HA detected that host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.HostUnconfigureErrorThere was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, reconnect the host to vCenter Server.
com.vmware.vc.HA.VMIsHADisabledIsolationEventvSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled
com.vmware.vc.HA.VMIsHADisabledRestartEventvSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled
com.vmware.vc.HA.VcCannotFindMasterEventvCenter Server is unable to find a master vSphere HA agent in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.HA.VmDasResetAbortedEventvSphere HA was unable to reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries
com.vmware.vc.HA.VmNotProtectedEventVirtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and HA may not attempt to restart it after a failure.
com.vmware.vc.HA.VmProtectedEventVirtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and HA will attempt to restart it after a failure.
com.vmware.vc.HA.VmUnprotectedEventVirtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected.
com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFullvSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space
com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastorevSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}
com.vmware.vc.HA.VmcpStorageFailureClearedDatastore {ds.name} mounted on host {host.name} was inaccessible. The condition was cleared and the datastore is now accessible
com.vmware.vc.HA.VmcpStorageFailureDetectedForVmvSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} in {datacenter.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore
com.vmware.vc.HA.VmcpTerminateVmAbortedvSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries
com.vmware.vc.HA.VmcpTerminatingVmvSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM was affected by an inaccessible datastore
com.vmware.vc.VmDiskConsolidatedEventVirtual machine {vm.name} disks consolidated successfully on {host.name} in cluster {computeResource.name} in {datacenter.name}.
com.vmware.vc.VmDiskConsolidationNeededVirtual machine {vm.name} disks consolidation is needed on {host.name} in cluster {computeResource.name} in {datacenter.name}.
com.vmware.vc.VmDiskConsolidationNoLongerNeededVirtual machine {vm.name} disks consolidation is no longer needed on {host.name} in cluster {computeResource.name} in {datacenter.name}.
com.vmware.vc.VmDiskFailedToConsolidateEventVirtual machine {vm.name} disks consolidation failed on {host.name} in cluster {computeResource.name} in {datacenter.name}.
com.vmware.vc.certmgr.HostCertManagementModeChangedEventHost Certificate Management Mode changed from {previousMode} to {presentMode}
com.vmware.vc.certmgr.HostCertMetadataChangedEventHost Certificate Management Metadata changed
com.vmware.vc.dvs.LacpConfigInconsistentEventSingle Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled.
com.vmware.vc.ft.VmAffectedByDasDisabledEventvSphere HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. vSphere HA will not restart VM {vm.name} or its Secondary VM after a failure.
com.vmware.vc.ha.VmRestartedByHAEventvSphere HA restarted virtual machine {vm.name} on host {host.name} in cluster {computeResource.name}
com.vmware.vc.host.AutoStartReconfigureFailedEventReconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed
com.vmware.vc.host.clear.vFlashResource.inaccessibleHost's virtual flash resource is restored to be accessible.
com.vmware.vc.host.problem.DeprecatedVMFSVolumeFoundDeprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version.
com.vmware.vc.host.problem.vFlashResource.inaccessibleHost's virtual flash resource is inaccessible.
com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEventVirtual flash resource capacity is extended
com.vmware.vc.host.vFlash.VFlashResourceConfiguredEventVirtual flash resource is configured on the host
com.vmware.vc.host.vFlash.VFlashResourceRemovedEventVirtual flash resource is removed from the host
com.vmware.vc.host.vFlash.modulesLoadedEventVirtual flash modules are loaded or reloaded on the host
com.vmware.vc.iofilter.FilterInstallationFailedEventvSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed
com.vmware.vc.iofilter.FilterInstallationSuccessEventvSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful
com.vmware.vc.iofilter.FilterUninstallationFailedEventvSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed
com.vmware.vc.iofilter.FilterUninstallationSuccessEventvSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful
com.vmware.vc.iofilter.FilterUpgradeFailedEventvSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed
com.vmware.vc.iofilter.FilterUpgradeSuccessEventvSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has succeeded
com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEventvSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully registered
com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEventvSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully unregistered
com.vmware.vc.profile.AnswerFileExportedEventAnswer file for host {host.name} in datacenter {datacenter.name} has been exported
com.vmware.vc.profile.AnswerFileUpdatedEventHost customization settings for host {host.name} in datacenter {datacenter.name} have been updated
com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEventThe datastore maintenance mode operation has been canceled
com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEventConfigured storage DRS on datastore cluster {objectName}
com.vmware.vc.sdrs.ConsistencyGroupViolationEventDatastore cluster {objectName} has datastores that belong to different SRM Consistency Groups
com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEventDatastore {ds.name} has entered maintenance mode
com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEventDatastore {ds.name} is entering maintenance mode
com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEventDatastore {ds.name} has exited maintenance mode
com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEventDatastore {ds.name} encountered errors while entering maintenance mode
com.vmware.vc.sdrs.StorageDrsDisabledEventDisabled storage DRS on datastore cluster {objectName}
com.vmware.vc.sdrs.StorageDrsInvocationFailedEventStorage DRS invocation failed on datastore cluster {objectName}
com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEventA new storage DRS recommendation has been generated on datastore cluster {objectName}
com.vmware.vc.sdrs.StorageDrsRecommendationAppliedAll pending recommendations on datastore cluster {objectName} were applied
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccessRefreshing CA certificates and CRLs succeeded for all registered VASA providers.
com.vmware.vc.stats.HostQuickStatesNotUpToDateEventQuick stats on {host.name} in {computeResource.name} in {datacenter.name} are not up-to-date
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEventHA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled
com.vmware.vc.vcp.FtFailoverEventFT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure
com.vmware.vc.vcp.FtFailoverFailedEventFT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary
com.vmware.vc.vcp.FtSecondaryRestartEventHA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure
com.vmware.vc.vcp.FtSecondaryRestartFailedEventFT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEventHA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long
com.vmware.vc.vcp.VcpNoActionEventHA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting
com.vmware.vc.vcp.VmRestartEventHA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.vcp.VmRestartFailedEventVirtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart
com.vmware.vc.vm.PowerOnAfterCloneErrorEventVirtual machine {vm.name} failed to power on after cloning on host {host.name} in datacenter {datacenter.name}
com.vmware.vc.vm.VmRegisterFailedEventVirtual machine {vm.name} registration on {host.name} in datacenter {datacenter.name} failed
com.vmware.vc.vmam.AppMonitoringNotSupportedApplication monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEventvSphere HA detected application heartbeat failure for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}
com.vmware.vc.vsan.ChecksumDisabledHostFoundEventFound a checksum disabled host {host.name} in a checksum protected vCenter Server cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.vsan.HostNotInClusterEvent{host.name} with Virtual SAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name}
com.vmware.vc.vsan.HostNotInVsanClusterEvent{host.name} is in a Virtual SAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have Virtual SAN service enabled
com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEventVirtual SAN vendor provider {host.name} has been successfully unregistered
com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEventVirtual SAN vendor provider {host.name} has been successfully registered
com.vmware.vc.vsan.NetworkMisConfiguredEventVirtual SAN network is not configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
esx.audit.dcui.defaults.factoryrestoreThe host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.disabledThe DCUI has been disabled.
esx.audit.dcui.enabledThe DCUI has been enabled.
esx.audit.dcui.host.rebootThe host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.host.shutdownThe host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.hostagents.restartThe management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.dcui.network.factoryrestoreThe host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.
esx.audit.esximage.install.novalidationAttempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations.
esx.audit.host.bootHost has booted.
esx.audit.host.stop.rebootHost is rebooting.
esx.audit.host.stop.shutdownHost is shutting down.
esx.audit.lockdownmode.disabledAdministrator access to the host has been enabled.
esx.audit.lockdownmode.enabledAdministrator access to the host has been disabled.
esx.audit.lockdownmode.exceptions.changedList of lockdown exception users has been changed.
esx.audit.maintenancemode.canceledThe host has canceled entering maintenance mode.
esx.audit.maintenancemode.enteredThe host has entered maintenance mode.
esx.audit.maintenancemode.enteringThe host has begun entering maintenance mode.
esx.audit.maintenancemode.exitedThe host has exited maintenance mode.
esx.audit.net.firewall.disabledFirewall has been disabled.
esx.audit.shell.disabledThe ESXi command line shell has been disabled.
esx.audit.shell.enabledThe ESXi command line shell has been enabled.
esx.audit.ssh.disabledSSH access has been disabled.
esx.audit.ssh.enabledSSH access has been enabled.
esx.audit.usb.config.changedUSB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
esx.audit.vmfs.lvm.device.discoveredOne or more LVM devices have been discovered on this host.
esx.audit.vsan.clustering.enabledVirtual SAN clustering and directory services have been enabled.
esx.audit.vsan.net.vnic.addedVirtual SAN virtual NIC has been added.
esx.clear.coredump.configuredA vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.
esx.clear.coredump.configured2At least one coredump target has been configured. Host core dumps will be saved.
esx.problem.coredump.unconfiguredNo vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.
esx.problem.coredump.unconfigured2No coredump target has been configured. Host core dumps cannot be saved.
esx.problem.cpu.amd.mce.dram.disabledDRAM ECC not enabled. Please enable it in BIOS.
esx.problem.cpu.intel.ioapic.listing.errorNot all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.
esx.problem.cpu.mce.invalidMCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.
esx.problem.host.coredumpAn unread host kernel core dump has been found.
esx.problem.migrate.vmotion.default.heap.create.failedFailed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure.
esx.problem.scsi.apd.event.descriptor.alloc.failedNo memory to allocate APD (All Paths Down) event subsystem.
esx.problem.scsi.device.io.invalid.disk.qfull.valueQFullSampleSize should be bigger than QFullThreshold. LUN queue depth throttling algorithm will not function as expected. Please set the QFullSampleSize and QFullThreshold disk configuration values in ESX correctly.
esx.problem.syslog.configSystem logging is not configured on host {host.name}. Please check Syslog options for the host under Configuration -> Software -> Advanced Settings in vSphere client.
esx.problem.syslog.nonpersistentSystem logs on host {host.name} are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.
esx.problem.visorfs.failureAn operation on the root filesystem has failed.
esx.problem.vmsyslogd.storage.failureLogging to storage has failed. Logs are no longer being stored locally on this host.
hbr.primary.ConnectionRestoredToHbrServerEventConnection to VR Server restored for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
hbr.primary.DeltaStartedEventSync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
hbr.primary.QuiesceNotSupportedQuiescing is not supported for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
hbr.primary.RpoOkForServerEventVR Server is compatible with the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
hbr.primary.RpoTooLowForServerEventVR Server does not support the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
hbr.primary.SyncStartedEventFull sync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.
vim.event.SubscriptionLicenseExpiredEventThe time-limited license on host {host.name} has expired. To comply with the EULA, renew the license at http://my.vmware.com
com.vmware.vim.eam.issue.unknownAgentVmUnknown agent VM {vm.name}
com.vmware.vim.eam.login.invalidFailed login to vSphere ESX Agent Manager
com.vmware.vim.eam.task.scanForUnknownAgentVmsCompletedScan for unknown agent VMs completed
com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiatedScan for unknown agent VMs initiated
FailoverLevelRestoredSufficient resources are available to satisfy vSphere HA failover level in cluster {computeResource.name} in {datacenter.name}
GeneralEventGeneral event: {message}
GeneralHostErrorEventError detected on {host.name} in {datacenter.name}: {message}
GeneralHostInfoEventIssue detected on {host.name} in {datacenter.name}: {message}
GeneralHostWarningEventIssue detected on {host.name} in {datacenter.name}: {message}
GeneralUserEventUser logged event: {message}
GeneralVmErrorEventError detected for {vm.name} on {host.name} in {datacenter.name}: {message}
GeneralVmInfoEventIssue detected for {vm.name} on {host.name} in {datacenter.name}: {message}
GeneralVmWarningEventIssue detected for {vm.name} on {host.name} in {datacenter.name}: {message}
GhostDvsProxySwitchDetectedEventThe vSphere Distributed Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter Server or does not contain this host.
GhostDvsProxySwitchRemovedEventA ghost proxy switch {switchUuid} on the host {host.name} was resolved.
GlobalMessageChangedEventThe message changed: {message}
HealthStatusChangedEvent{componentName} status changed from {oldStatus} to {newStatus}
HostAddedEventAdded host {host.name} to datacenter {datacenter.name}
HostAddFailedEventCannot add host {hostname} to datacenter {datacenter.name}
HostAdminDisableEventAdministrator access to the host {host.name} is disabled
HostAdminEnableEventAdministrator access to the host {host.name} has been restored
HostCnxFailedAccountFailedEventCannot connect {host.name} in {datacenter.name}: cannot configure management account
HostCnxFailedAlreadyManagedEventCannot connect {host.name} in {datacenter.name}: already managed by {serverName}
HostCnxFailedBadCcagentEventCannot connect host {host.name} in {datacenter.name} : server agent is not responding
HostCnxFailedBadUsernameEventCannot connect {host.name} in {datacenter.name}: incorrect user name or password
HostCnxFailedBadVersionEventCannot connect {host.name} in {datacenter.name}: incompatible version
HostCnxFailedCcagentUpgradeEventCannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service.
HostCnxFailedEventCannot connect {host.name} in {datacenter.name}: error connecting to host
HostCnxFailedNetworkErrorEventCannot connect {host.name} in {datacenter.name}: network error
HostCnxFailedNoAccessEventCannot connect host {host.name} in {datacenter.name}: account has insufficient privileges
HostCnxFailedNoConnectionEventCannot connect host {host.name} in {datacenter.name}
HostCnxFailedNoLicenseEventCannot connect {host.name} in {datacenter.name}: not enough CPU licenses
HostCnxFailedNotFoundEventCannot connect {host.name} in {datacenter.name}: incorrect host name
HostCnxFailedTimeoutEventCannot connect {host.name} in {datacenter.name}: time-out waiting for host response
HostComplianceCheckedEventHost {host.name} checked for compliance.
HostCompliantEventHost {host.name} is in compliance with the attached profile
HostConfigAppliedEventHost configuration changes applied.
HostConnectedEventConnected to {host.name} in {datacenter.name}
HostConnectionLostEventHost {host.name} in {datacenter.name} is not responding
HostDasDisabledEventvSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} is disabled
HostDasDisablingEventvSphere HA is being disabled on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}
HostDasEnabledEventvSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} is enabled
HostDasEnablingEventEnabling vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name}
HostDasErrorEventvSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error {message}: {reason.@enum.HostDasErrorEvent.HostDasErrorReason}
HostDasOkEventvSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} is configured correctly
HostDisconnectedEventDisconnected from {host.name} in {datacenter.name}. Reason: {reason.@enum.HostDisconnectedEvent.ReasonCode}
HostDVPortEventdvPort connected to host {host.name} in {datacenter.name} changed status
HostEnableAdminFailedEventCannot restore some administrator permissions to the host {host.name}
HostExtraNetworksEventHost {host.name} has the following extra networks not used by other hosts for vSphere HA communication:{ips}. Consider using vSphere HA advanced option das.allowNetwork to control network usage
HostGetShortNameFailedEventCannot complete command 'hostname -s' on host {host.name} or returned incorrect name format
HostInAuditModeEventHost {host.name} is running in audit mode. The host's configuration will not be persistent across reboots.
HostInventoryFullEventMaximum ({capacity}) number of hosts allowed for this edition of vCenter Server has been reached
HostInventoryUnreadableEventThe virtual machine inventory file on host {host.name} is damaged or unreadable.
HostIpChangedEventIP address of the host {host.name} changed from {oldIP} to {newIP}
HostIpInconsistentEventConfiguration of host IP address is inconsistent on host {host.name}: address resolved to {ipAddress} and {ipAddress2}
HostIpToShortNameFailedEventCannot resolve IP address to short name on host {host.name}
HostIsolationIpPingFailedEventvSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} could not reach isolation address: {isolationIp}
HostLicenseExpiredEventA host license for {host.name} has expired
HostLocalPortCreatedEventA host local port {hostLocalPort.portKey} is created on vSphere Distributed Switch {hostLocalPort.switchUuid} to recover from management network connectivity loss on virtual NIC device {hostLocalPort.vnic} on the host {host.name}.
HostMissingNetworksEventHost {host.name} does not have the following networks used by other hosts for vSphere HA communication:{ips}. Consider using vSphere HA advanced option das.allowNetwork to control network usage
HostMonitoringStateChangedEventvSphere HA host monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.ServiceState}
HostNoAvailableNetworksEventHost {host.name} in cluster {computeResource.name} in {datacenter.name} currently has no available networks for vSphere HA Communication. The following networks are currently used by HA: {ips}
HostNoHAEnabledPortGroupsEventHost {host.name} in cluster {computeResource.name} in {datacenter.name} has no port groups enabled for vSphere HA communication.
HostNonCompliantEventHost {host.name} is not in compliance with the attached profile
HostNoRedundantManagementNetworkEventHost {host.name} in cluster {computeResource.name} in {datacenter.name} currently has no management network redundancy
HostNotInClusterEventHost {host.name} is not a cluster member in {datacenter.name}
HostOvercommittedEventInsufficient capacity in host {computeResource.name} to satisfy resource configuration in {datacenter.name}
HostPrimaryAgentNotShortNameEventPrimary agent {primaryAgent} was not specified as a short name to host {host.name}
HostProfileAppliedEventProfile is applied on the host {host.name}
HostReconnectionFailedEventCannot reconnect to {host.name} in {datacenter.name}
HostRemovedEventRemoved host {host.name} in {datacenter.name}
HostShortNameInconsistentEventHost names {shortName} and {shortName2} both resolved to the same IP address. Check the host's network configuration and DNS entries
HostShortNameToIpFailedEventCannot resolve short name {shortName} to IP address on host {host.name}
HostShutdownEventShut down of {host.name} in {datacenter.name}: {reason}
HostStatusChangedEventConfiguration status on host {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name}
HostSyncFailedEventCannot synchronize host {host.name}. {reason.msg}
HostUpgradeFailedEventCannot install or upgrade vCenter agent service on {host.name} in {datacenter.name}
HostUserWorldSwapNotEnabledEventThe userworld swap is not enabled on the host {host.name}
HostVnicConnectedToCustomizedDVPortEventHost {host.name} vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey} with port level configuration, which might be different from the dvPort group.
HostWwnChangedEventWWNs are changed for {host.name}
HostWwnConflictEventThe WWN ({wwn}) of {host.name} conflicts with the currently registered WWN
IncorrectHostInformationEventHost {host.name} did not provide the information needed to acquire the correct set of licenses
InfoUpgradeEvent{message}
InsufficientFailoverResourcesEventInsufficient resources to satisfy vSphere HA failover level on cluster {computeResource.name} in {datacenter.name}
InvalidEditionEventThe license edition '{feature}' is invalid
IScsiBootFailureEventBooting from iSCSI failed with an error. See the VMware Knowledge Base for information on configuring iBFT networking.
LicenseExpiredEventLicense {feature.featureName} has expired
LicenseNonComplianceEventLicense inventory is not compliant. Licenses are overused
LicenseRestrictedEventUnable to acquire licenses due to a restriction in the option file on the license server.
LicenseServerAvailableEventLicense server {licenseServer} is available
LicenseServerUnavailableEventLicense server {licenseServer} is unavailable
LocalDatastoreCreatedEventCreated local datastore {datastore.name} on {host.name} in {datacenter.name}
LocalTSMEnabledEventESXi Shell for the host {host.name} has been enabled
LockerMisconfiguredEventDatastore {datastore} which is configured to back the locker does not exist
LockerReconfiguredEventLocker was reconfigured from {oldDatastore} to {newDatastore} datastore
MigrationErrorEventUnable to migrate {vm.name} from {host.name} in {datacenter.name}: {fault.msg}
MigrationHostErrorEventUnable to migrate {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg}
MigrationHostWarningEventMigration of {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg}
MigrationResourceErrorEventCannot migrate {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg}
MigrationResourceWarningEventMigration of {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg}
MigrationWarningEventMigration of {vm.name} from {host.name} in {datacenter.name}: {fault.msg}
MtuMatchEventThe MTU configured in the vSphere Distributed Switch matches the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}
MtuMismatchEventThe MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}
NASDatastoreCreatedEventCreated NAS datastore {datastore.name} on {host.name} in {datacenter.name}
NetworkRollbackEventNetwork configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.
NoAccessUserEventCannot login user {userName}@{ipAddress}: no permission
NoDatastoresConfiguredEventNo datastores have been configured on the host {host.name}
NoLicenseEventA required license {feature.featureName} is not reserved
NoMaintenanceModeDrsRecommendationForVMUnable to automatically migrate {vm.name} from {host.name}
NonVIWorkloadDetectedOnDatastoreEventAn unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.
NotEnoughResourcesToStartVmEventInsufficient resources to fail over {vm.name} in {computeResource.name} that resides in {datacenter.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}
OutOfSyncDvsHostThe vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server.
PermissionAddedEventPermission created for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate}
PermissionRemovedEventPermission rule removed for {principal} on {entity.name}
PermissionUpdatedEventPermission changed for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate}
ProfileAssociatedEventProfile {profile.name} attached.
ProfileChangedEventProfile {profile.name} was changed.
ProfileCreatedEventProfile is created.
ProfileDissociatedEventProfile {profile.name} detached.
ProfileReferenceHostChangedEventProfile {profile.name} reference host changed.
ProfileRemovedEventProfile was removed.
RecoveryEventThe host {hostName} network connectivity was recovered on the management virtual NIC {vnic} by connecting to a new port {portKey} on the vSphere Distributed Switch {dvsUuid}.
RemoteTSMEnabledEventSSH for the host {host.name} has been enabled
ResourcePoolCreatedEventCreated resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}
ResourcePoolDestroyedEventRemoved resource pool {resourcePool.name} on {computeResource.name} in {datacenter.name}
ResourcePoolMovedEventMoved resource pool {resourcePool.name} from {oldParent.name} to {newParent.name} on {computeResource.name} in {datacenter.name}
ResourcePoolReconfiguredEventUpdated configuration for {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}
ResourceViolatedEventResource usage exceeds configuration for resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}
RoleAddedEventNew role {role.name} created
RoleRemovedEventRole {role.name} removed
RoleUpdatedEventModified role {role.name}
RollbackEventThe Network API {methodName} on this entity caused the host {hostName} to be disconnected from the vCenter Server. The configuration change was rolled back on the host.
ScheduledTaskCompletedEventTask {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} completed successfully
ScheduledTaskCreatedEventCreated task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name}
ScheduledTaskEmailCompletedEventTask {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} sent email to {to}
ScheduledTaskEmailFailedEventTask {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} cannot send email to {to}: {reason.msg}
ScheduledTaskFailedEventTask {scheduledTask.name} on {entity.name} in datacenter {datacenter.name} cannot be completed: {reason.msg}
ScheduledTaskReconfiguredEventReconfigured task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name}
ScheduledTaskRemovedEventRemoved task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name}
ScheduledTaskStartedEventRunning task {scheduledTask.name} on {entity.name} in datacenter {datacenter.name}
ServerLicenseExpiredEventA vCenter Server license has expired
ServerStartedSessionEventvCenter started
SessionTerminatedEventA session for user '{terminatedUsername}' has stopped
TaskEventTask: {info.descriptionId}
TaskTimeoutEventTask: {info.descriptionId} time-out
TeamingMatchEventTeaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} matches the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus}
TeamingMisMatchEventTeaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} does not match the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus}
TemplateBeingUpgradedEventUpgrading template {legacyTemplate}
TemplateUpgradedEventTemplate {legacyTemplate} upgrade completed
TemplateUpgradeFailedEventCannot upgrade template {legacyTemplate} due to: {reason.msg}
TimedOutHostOperationEventThe operation performed on {host.name} in {datacenter.name} timed out
UnlicensedVirtualMachinesEventThere are {unlicensed} unlicensed virtual machines on host {host} - there are only {available} licenses available
UnlicensedVirtualMachinesFoundEvent{unlicensed} unlicensed virtual machines found on host {host}
UpdatedAgentBeingRestartedEventThe agent on host {host.name} is updated and will soon restart
UplinkPortMtuNotSupportEventNot all VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.
UplinkPortMtuSupportEventAll VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.
UplinkPortVlanTrunkedEventThe configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.
UplinkPortVlanUntrunkedEventNot all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.
UserAssignedToGroupUser {userLogin} was added to group {group}
UserLoginSessionEventUser {userName}@{ipAddress} logged in as {userAgent}
UserLogoutSessionEventUser {userName}@{ipAddress} logged out (login time: {loginTime}, number of API invocations: {callCount}, user agent: {userAgent})
UserPasswordChangedPassword was changed for account {userLogin} on host {host.name}
UserUnassignedFromGroupUser {userLogin} removed from group {group}
UserUpgradeEvent{message}
VcAgentUninstalledEventvCenter agent has been uninstalled from {host.name} in {datacenter.name}
VcAgentUninstallFailedEventCannot uninstall vCenter agent from {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason}
VcAgentUpgradedEventvCenter agent has been upgraded on {host.name} in {datacenter.name}
VcAgentUpgradeFailedEventCannot upgrade vCenter agent on {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason}
VimAccountPasswordChangedEventVIM account password was changed on host {host.name}
VmAcquiredMksTicketEventRemote console to {vm.name} on {host.name} in {datacenter.name} has been opened
VmAcquiredTicketEventA ticket for {vm.name} of type {ticketType.@enum.VirtualMachine.TicketType} on {host.name} in {datacenter.name} has been acquired
VmAutoRenameEventInvalid name for {vm.name} on {host.name} in {datacenter.name}. Renamed from {oldName} to {newName}
VmBeingClonedEventCloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name}
VmBeingClonedNoFolderEventCloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} to a vApp
VmBeingCreatedEventCreating {vm.name} on host {host.name} in {datacenter.name}
VmBeingDeployedEventDeploying {vm.name} on host {host.name} in {datacenter.name} from template {srcTemplate.name}
VmBeingHotMigratedEventMigrating {vm.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name} in {datacenter.name}
VmBeingMigratedEventRelocating {vm.name} from {host.name}, {ds.name} in {datacenter.name} to {destHost.name}, {destDatastore.name} in {destDatacenter.name}
VmBeingRelocatedEventRelocating {vm.name} in {datacenter.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name}
VmClonedEventClone of {sourceVm.name} completed
VmCloneFailedEventCannot clone {vm.name}: {reason.msg}
VmConfigMissingEventConfiguration file for {vm.name} on {host.name} in {datacenter.name} cannot be found
VmConnectedEventVirtual machine {vm.name} is connected
VmCreatedEventCreated virtual machine {vm.name} on {host.name} in {datacenter.name}
VmDasBeingResetEvent{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}
VmDasBeingResetWithScreenshotEvent{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}.
VmDasResetFailedEventvSphere HA cannot reset {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}
VmDasUpdateErrorEventUnable to update vSphere HA agents given the state of {vm.name}
VmDasUpdateOkEventvSphere HA agents have been updated with the current state of the virtual machine
VmDateRolledBackEventDisconnecting all hosts as the date of virtual machine {vm.name} has been rolled back
VmDeployedEventTemplate {srcTemplate.name} deployed on host {host.name}
VmDeployFailedEventCannot deploy template: {reason.msg}
VmDisconnectedEvent{vm.name} on host {host.name} in {datacenter.name} is disconnected
VmDiscoveredEventDiscovered {vm.name} on {host.name} in {datacenter.name}
VmDiskFailedEventCannot create virtual disk {disk}
VmDVPortEventdvPort connected to VM {vm.name} on {host.name} in {datacenter.name} changed status
VmEmigratingEventMigrating {vm.name} off host {host.name} in {datacenter.name}
VmEndRecordingEventEnd a recording session on {vm.name}
VmEndReplayingEventEnd a replay session on {vm.name}
VmFailedMigrateEventCannot migrate {vm.name} from {host.name}, {ds.name} to {destHost.name}, {destDatastore.name} in {datacenter.name}
VmFailedRelayoutEventCannot complete relayout {vm.name} on {host.name} in {datacenter.name}: {reason.msg}
VmFailedRelayoutOnVmfs2DatastoreEventCannot complete relayout for virtual machine {vm.name} which has disks on a VMFS2 volume.
VmFailedStartingSecondaryEventvCenter cannot start the Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Reason: {reason.@enum.VmFailedStartingSecondaryEvent.FailureReason}
VmFailedToPowerOffEventCannot power off {vm.name} on {host.name} in {datacenter.name}: {reason.msg}
VmFailedToPowerOnEventCannot power on {vm.name} on {host.name} in {datacenter.name}. {reason.msg}
VmFailedToRebootGuestEventCannot reboot the guest OS for {vm.name} on {host.name} in {datacenter.name}. {reason.msg}
VmFailedToResetEventCannot reset {vm.name} on {host.name} in {datacenter.name}: {reason.msg}
VmFailedToShutdownGuestEvent{vm.name} cannot shut down the guest OS on {host.name} in {datacenter.name}: {reason.msg}
VmFailedToStandbyGuestEvent{vm.name} cannot standby the guest OS on {host.name} in {datacenter.name}: {reason.msg}
VmFailedToSuspendEventCannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg}
VmFailedUpdatingSecondaryConfigvCenter cannot update the Fault Tolerance secondary VM configuration for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmFailoverFailedvSphere HA unsuccessfully failed over {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}
VmFaultToleranceStateChangedEventFault Tolerance state of {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}
VmFaultToleranceTurnedOffEventFault Tolerance protection has been turned off for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmFaultToleranceVmTerminatedEventThe Fault Tolerance VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}
VMFSDatastoreCreatedEventCreated VMFS datastore {datastore.name} on {host.name} in {datacenter.name}
VMFSDatastoreExpandedEventExpanded VMFS datastore {datastore.name} on {host.name} in {datacenter.name}
VMFSDatastoreExtendedEventExtended VMFS datastore {datastore.name} on {host.name} in {datacenter.name}
VmGuestOSCrashedEvent{vm.name} on {host.name}: Guest operating system has crashed.
VmGuestRebootEventGuest OS reboot for {vm.name} on {host.name} in {datacenter.name}
VmGuestShutdownEventGuest OS shut down for {vm.name} on {host.name} in {datacenter.name}
VmGuestStandbyEventGuest OS standby for {vm.name} on {host.name} in {datacenter.name}
VmHealthMonitoringStateChangedEventvSphere HA VM monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.VmMonitoringState}
VmInstanceUuidAssignedEventAssign a new instance UUID ({instanceUuid}) to {vm.name}
VmInstanceUuidChangedEventThe instance UUID of {vm.name} has been changed from ({oldInstanceUuid}) to ({newInstanceUuid})
VmInstanceUuidConflictEventThe instance UUID ({instanceUuid}) of {vm.name} conflicts with the instance UUID assigned to {conflictedVm.name}
VmMacAssignedEventNew MAC address ({mac}) assigned to adapter {adapter} for {vm.name}
VmMacChangedEventChanged MAC address from {oldMac} to {newMac} for adapter {adapter} for {vm.name}
VmMacConflictEventThe MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name}
VmMaxFTRestartCountReachedvSphere HA stopped trying to restart Secondary VM {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} because the maximum VM restart count was reached
VmMaxRestartCountReachedvSphere HA stopped trying to restart {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} because the maximum VM restart count was reached
VmMessageErrorEventError message on {vm.name} on {host.name} in {datacenter.name}: {message}
VmMessageEventMessage on {vm.name} on {host.name} in {datacenter.name}: {message}
VmMessageWarningEventWarning message on {vm.name} on {host.name} in {datacenter.name}: {message}
VmMigratedEventMigration of virtual machine {vm.name} from {sourceHost.name}, {sourceDatastore.name} to {host.name}, {ds.name} completed
VmNoCompatibleHostForSecondaryEventNo compatible host for the Fault Tolerance secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmNoNetworkAccessEventNot all networks for {vm.name} are accessible by {destHost.name}
VmOrphanedEvent{vm.name} does not exist on {host.name} in {datacenter.name}
VMotionLicenseExpiredEventA vMotion license for {host.name} has expired
VmPoweredOffEvent{vm.name} on {host.name} in {datacenter.name} is powered off
VmPoweredOnEvent{vm.name} on {host.name} in {datacenter.name} is powered on
VmPoweringOnWithCustomizedDVPortEventVirtual machine {vm.name} powered On with vNICs connected to dvPorts that have a port level configuration, which might be different from the dvPort group configuration.
VmPowerOffOnIsolationEventvSphere HA powered off {vm.name} on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}
VmPrimaryFailoverEventFault Tolerance VM ({vm.name}) failed over to {host.name} in cluster {computeResource.name} in {datacenter.name}. {reason.@enum.VirtualMachine.NeedSecondaryReason}
VmReconfiguredEventReconfigured {vm.name} on {host.name} in {datacenter.name}
VmRegisteredEventRegistered {vm.name} on {host.name} in {datacenter.name}
VmRelayoutSuccessfulEventRelayout of {vm.name} on {host.name} in {datacenter.name} completed
VmRelayoutUpToDateEvent{vm.name} on {host.name} in {datacenter.name} is in the correct format and relayout is not necessary
VmReloadFromPathEvent{vm.name} on {host.name} reloaded from new configuration {configPath}.
VmReloadFromPathFailedEvent{vm.name} on {host.name} could not be reloaded from {configPath}.
VmRelocatedEventCompleted the relocation of the virtual machine
VmRelocateFailedEventCannot relocate virtual machine '{vm.name}' in {datacenter.name}
VmRemoteConsoleConnectedEventRemote console connected to {vm.name} on host {host.name}
VmRemoteConsoleDisconnectedEventRemote console disconnected from {vm.name} on host {host.name}
VmRemovedEventRemoved {vm.name} on {host.name} from {datacenter.name}
VmRenamedEventRenamed {vm.name} from {oldName} to {newName} in {datacenter.name}
VmRequirementsExceedCurrentEVCModeEventFeature requirements of {vm.name} exceed capabilities of {host.name}'s current EVC mode.
VmResettingEvent{vm.name} on {host.name} in {datacenter.name} is reset
VmResourcePoolMovedEventMoved {vm.name} from resource pool {oldParent.name} to {newParent.name} in {datacenter.name}
VmResourceReallocatedEventChanged resource allocation for {vm.name}
VmRestartedOnAlternateHostEventVirtual machine {vm.name} was restarted on {host.name} since {sourceHost.name} failed
VmResumingEvent{vm.name} on {host.name} in {datacenter.name} is resumed
VmSecondaryAddedEventA Fault Tolerance secondary VM has been added for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmSecondaryDisabledBySystemEventvCenter disabled Fault Tolerance on VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the Secondary VM could not be powered On.
VmSecondaryDisabledEventDisabled Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmSecondaryEnabledEventEnabled Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmSecondaryStartedEventStarted Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmShutdownOnIsolationEventvSphere HA shut down {vm.name} on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation}
VmStartingEvent{vm.name} on host {host.name} in {datacenter.name} is starting
VmStartingSecondaryEventStarting Fault Tolerance secondary VM for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}
VmStartRecordingEventStart a recording session on {vm.name}
VmStartReplayingEventStart a replay session on {vm.name}
VmStaticMacConflictEventThe static MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name}
VmStoppingEvent{vm.name} on {host.name} in {datacenter.name} is stopping
VmSuspendedEvent{vm.name} on {host.name} in {datacenter.name} is suspended
VmSuspendingEvent{vm.name} on {host.name} in {datacenter.name} is being suspended
VmTimedoutStartingSecondaryEventStarting the Fault Tolerance secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} timed out within {timeout} ms
VmUnsupportedStartingEventUnsupported guest OS {guestId} for {vm.name} on {host.name} in {datacenter.name}
VmUpgradeCompleteEventVirtual machine compatibility upgraded to {version.@enum.vm.hwVersion}
VmUpgradeFailedEventCannot upgrade virtual machine compatibility.
VmUpgradingEventUpgrading virtual machine compatibility of {vm.name} in {datacenter.name} to {version.@enum.vm.hwVersion}
VmUuidAssignedEventAssigned new BIOS UUID ({uuid}) to {vm.name} on {host.name} in {datacenter.name}
VmUuidChangedEventChanged BIOS UUID from {oldUuid} to {newUuid} for {vm.name} on {host.name} in {datacenter.name}
VmUuidConflictEventBIOS ID ({uuid}) of {vm.name} conflicts with that of {conflictedVm.name}
VmVnicPoolReservationViolationClearEventThe reservation violation on the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is cleared
VmVnicPoolReservationViolationRaiseEventThe reservation allocated to the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is violated
VmWwnAssignedEventNew WWNs assigned to {vm.name}
VmWwnChangedEventWWNs are changed for {vm.name}
VmWwnConflictEventThe WWN ({wwn}) of {vm.name} conflicts with the currently registered WWN
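The messages above are templates: each `{...}` placeholder is filled from the corresponding event object's properties at display time, using dotted paths such as `{vm.name}` (and `@enum` suffixes that map raw values to display strings). As a rough illustration of the substitution for plain dotted placeholders, here is a minimal sketch; the function name and the flat-dict lookup are assumptions for the example, not part of the vSphere API:

```python
import re

def expand_template(template, values):
    """Replace each {placeholder} in a vCenter event message template
    with its value from the `values` dict (keyed by the full dotted
    path, e.g. "vm.name"); unknown placeholders are left intact."""
    def substitute(match):
        key = match.group(1)
        # Fall back to the original "{...}" text when no value is known.
        return str(values.get(key, match.group(0)))
    return re.sub(r"\{([^{}]+)\}", substitute, template)

msg = expand_template(
    "{vm.name} on {host.name} in {datacenter.name} is powered on",
    {"vm.name": "web01", "host.name": "esx-01", "datacenter.name": "DC1"},
)
print(msg)  # web01 on esx-01 in DC1 is powered on
```

In practice the values come from the event's properties as returned by the vSphere API (for example via `EventManager` queries), and `@enum` placeholders additionally require a lookup table of localized enum display strings.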